# Abstractive Summarization Guided by Latent Hierarchical Document Structure

Yifu Qiu Shay B. Cohen

Institute for Language, Cognition and Computation

School of Informatics, University of Edinburgh

10 Crichton Street, Edinburgh, EH8 9AB

Y.QIU-20@sms.ed.ac.uk, scohen@inf.ed.ac.uk

# Abstract

Sequential abstractive neural summarizers often do not use the underlying structure in the input article or dependencies between the input sentences. This structure is essential to integrate and consolidate information from different parts of the text. To address this shortcoming, we propose a hierarchy-aware graph neural network (HierGNN) which captures such dependencies through three main steps: 1) learning a hierarchical document structure through a latent structure tree learned by a sparse matrix-tree computation; 2) propagating sentence information over this structure using a novel message-passing node propagation mechanism to identify salient information; 3) using graph-level attention to concentrate the decoder on salient information. Experiments confirm HierGNN improves strong sequence models such as BART, with a 0.55 and 0.75 margin in average ROUGE-1/2/L for CNN/DM and XSum. Further human evaluation demonstrates that summaries produced by our model are more relevant and less redundant than those of the baselines into which HierGNN is incorporated. We also find that HierGNN synthesizes summaries more by fusing multiple source sentences than by compressing a single source sentence, and that it processes long inputs more effectively.
# 1 Introduction

Sequential neural network architectures in their various forms have become the mainstay in abstractive summarization (See et al., 2017; Lewis et al., 2020). However, the quality of machine-produced summaries still lags far behind the quality of human summaries (Huang et al., 2020a; Xie et al., 2021; Cao et al., 2022; Lebanoff et al., 2019). Due to their sequential nature, a challenge with neural summarizers is to capture hierarchical and inter-sentential dependencies in the summarized document.

# Article Sentences:

1. The town is home to the prestigious Leander Club, which has trained more than 100 Olympic medal-winning rowers.
- 2 sentences are abbreviated here.
4. The Royal Mail has painted more than 50 postboxes gold following Team GB's gold medal haul at London 2012.
5. Originally it said it was only painting them in winners hometowns, or towns with which they are closely associated.
6. Town mayor Elizabeth Hodgkin said: "We are the home of rowing ... I feel very excited about it."
- 5 sentences are abbreviated here.
12. The Henley-on-Thames postbox was painted on Friday.
- one sentence is abbreviated here.

Reference Summary: The Royal Mail has painted a postbox gold in the Oxfordshire town of Henley-on-Thames - in recognition of its medal winning rowing club.

BART's Summary: A postbox in Henley-on-Thames has been painted gold as part of the Royal Mail's "Olympic gold" campaign.

Our HierGNN's Summary: A Royal Mail postbox in Henley-on-Thames has been painted gold in honour of the town's Olympic rowing success.

Table 1: Example of an article from XSum with the human-written reference summary and the summaries given by BART (Lewis et al., 2020) and our HierGNN equipped with BART. BART's summary fails to capture all the information pieces in the reference (as highlighted in various colors), while HierGNN has an advantage in combining information from multiple locations in the source.
Progress in cognitive science suggests that humans construct and reason over a latent hierarchical structure of a document when reading its text (Graesser et al., 1994; Goldman et al., 1999). Such reasoning behavior includes uncovering the salient content and effectively aggregating the related clues spread across the document in order to understand it. Lebanoff et al. (2019) found that human editors usually prefer writing a summary by fusing information from multiple article sentences and reorganizing that information in the summary (sentence fusion), rather than dropping non-essential elements of a single sentence, such as prepositional phrases and adjectives (sentence compression). Across summarization benchmarks, between 60 and 85% of summary sentences are generated by sentence fusion. These findings support our motivation to make use of hierarchical document structure when summarizing a document.

We present a document hierarchy-aware graph neural network (HierGNN), a neural encoder with a reasoning functionality that can be effectively incorporated into any sequence-to-sequence (seq2seq) neural summarizer. Our HierGNN first learns a latent hierarchical graph via a sparse variant of the matrix-tree computation (Koo et al., 2007; Liu et al., 2019a). It then formulates sentence-level reasoning as a graph propagation problem via a novel message-passing mechanism. During decoding, a graph-selection attention mechanism serves as a source sentence selector, hierarchically indicating to the attention module which tokens in the input sentences to focus on.

Our experiments with HierGNN, incorporated into both pointer-generator networks (See et al., 2017) and BART (Lewis et al., 2020), confirm that HierGNN substantially improves both the non-pretrained and pretrained seq2seq baselines in producing high-quality summaries.
Specifically, our best HierGNN-BART achieves an average improvement of 0.55 and 0.75 points in ROUGE-1/2/L on CNN/DM and XSum, respectively. Compared with a plain seq2seq model, HierGNN encourages the summarizer to favor sentence fusion over sentence compression when generating summaries. Modeling the hierarchical document structure via our sparse matrix-tree computation also enables HierGNN to treat long sequences more effectively. In addition, our sparse adaptive variant of the matrix-tree computation is more expressive than the original one (Koo et al., 2007; Liu et al., 2019a). We summarize our contributions as follows:

- We present a novel encoder architecture for improving seq2seq summarizers. This architecture captures the hierarchical document structure via an adaptive sparse matrix-tree computation, with a new propagation rule for achieving inter-sentence reasoning.
- We design a graph-selection attention mechanism to fully leverage the learned structural information during decoding, rather than using it only during encoding.
- Results on CNN/DM and XSum demonstrate the effectiveness of HierGNN in improving the quality of summaries for both non-pretrained and pretrained baselines. An in-depth analysis confirms that our module improves the integration of information from multiple sites in the input article and that it is more effective in processing long input sequences.

# 2 Related Work

Neural Abstractive Summarization Rush et al. (2015) first proposed to use a sequence-to-sequence model with an attention mechanism to perform sentence compression. Mendes et al. (2019) demonstrated the advantages and limitations of neural methods based on sentence compression. The pointer-generator network (PGN; See et al. 2017) enhances the attention model with a copying functionality.
PGN has been further extended to create summarization systems by incorporating topic information (Liu et al., 2019b), document structural information (Song et al., 2018) and semantic information (Hardy and Vlachos, 2018), and it was improved by replacing the plain LSTM module with the more advanced Transformer model to overcome the difficulty of modeling long input sequences (Pilault et al., 2020; Wang et al., 2021; Fonseca et al., 2022). Among pretrained models, BERTSum (Liu and Lapata, 2019) adopted the BERT encoder for the summarizer, with a randomly initialized decoder. Lewis et al. (2020) presented BART, which pre-trains both the underlying encoder and decoder. Dou et al. (2021) investigated "guidance signals" (e.g., keywords, salient sentences) to further boost performance.

Graph Neural Approaches for Summarization Graph neural networks have demonstrated their ability to capture rich dependencies in documents to be summarized. Wang et al. (2020) use a "heterogeneous graph" with sentence nodes and co-occurring word nodes to capture sentence dependencies. Jin et al. (2020) use two separate encoders to encode the input sequence together with a parsed dependency graph. Cui et al. (2020) use a bipartite graph with a topic model to better capture inter-sentence relationships. Kwon et al. (2021) capture both intra- and inter-sentence relationships via a nested tree structure. Zhu et al. (2021) use entity-relation information from a knowledge graph to increase the factual consistency of summaries.

Our approach is related to the structural attention model (Balachandran et al., 2021; Liu et al., 2019a), but differs in two major ways: (i) we introduce an adaptive sparse matrix-tree construction to learn a latent hierarchical graph, together with a novel propagation rule; (ii) we use the structural information in both the encoder and the decoder for abstractive summarization, not just the encoder.
This proves more effective for unsupervised learning of the latent hierarchical structure, and it outperforms an approach that leverages an external graph constructor (Balachandran et al., 2021).

# 3 Hierarchy-aware Graph Neural Encoder

HierGNN learns the document structure in an end-to-end fashion without any direct structure supervision, and does not need an external parser to construct the structure, unlike previous work (Balachandran et al., 2021; Huang et al., 2020b; Wang et al., 2020; Cardenas et al., 2022). In addition, it empirically improves over supervised graph construction, which has been a challenge (Balachandran et al., 2021).

Sequential summarizers encode an $N$-token article $X = (x_{1}, \dots, x_{N})$ as $d$-dimensional latent vectors using an encoding function $\mathbf{h}_{enc}(x_t) \in \mathbb{R}^d$, and then decode them into the target summary $Y$. (We denote by $\mathbf{h}_{enc}(X)$ the sequence of $x_t$ encodings for $t \leq N$.) Our model adds four modules to this architecture: i) a sparse matrix-tree computation for inferring the hierarchical document structure; ii) a novel message-passing layer to identify inter-sentence dependencies; iii) a reasoning fusion layer aggregating the outputs of the message-passing module; and iv) a graph-selection attention module to leverage the encoded structural information.

# 3.1 Learning the Latent Hierarchical Structure

We first introduce our latent structure learning algorithm, which makes use of a sparse variant of the matrix-tree theorem (Tutte, 1986; Koo et al., 2007).

Latent Document Hierarchical Graph. We represent the document as a complete weighted graph, with each node representing a sentence. The edge weights are defined as the marginal probability of a directional dependency between two sentences.
In addition, each sentence node has an extra probability value, the "root probability", which indicates the hierarchical role of the sentence, such as the lead, the most important facts, or other information, defined based on the inverted pyramid model for news articles (Pottker, 2003; Ytreberg, 2001).

Intuitively, a sentence with a high root probability (high hierarchical position) conveys more general information; namely, it is a connector, while a sentence with a lower root probability (an information node) carries details supporting its higher connectors. The underlying graph structure is latent rather than fixed; it is summed out in our overall probability model using the matrix-tree theorem.

Sparse Matrix-Tree Computation. For an article with $M$ sentences, we start from the sentence embeddings as the node initialization $H^{(0)} = [\mathbf{s}_1,\dots,\mathbf{s}_i,\dots,\mathbf{s}_M]$. We then use two independent non-linear transformations to obtain a pair of parent and child representations for each sentence,

$$
\begin{array}{l} \mathbf{s}_{i}^{(p)} = \sigma (W_{p} \mathbf{s}_{i} + b_{p}), \\ \mathbf{s}_{i}^{(c)} = \sigma (W_{c} \mathbf{s}_{i} + b_{c}), \end{array}
$$

where $W_{p}, W_{c}, b_{p}, b_{c}$ are parameters and $\sigma$ is the ReLU activation function (Dahl et al., 2013).

The standard matrix-tree computation (MTC; Smith and Smith 2007; Koo et al. 2007; McDonald and Satta 2007) uses the exponential function to calculate a matrix $F \in \mathbb{R}^{M \times M}$ with positive values, each element $f_{ij}$ representing the weight of the directed edge from node $s_i$ to $s_j$, together with a positive vector of root scores $\mathbf{f}^{(root)} \in \mathbb{R}^M$. However, a dense matrix degrades our graph reasoning module by mixing in irrelevant information from all $M$ sentence nodes, including redundant ones.
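The matrix-tree computation referenced above converts the edge scores $F$ and root scores $\mathbf{f}^{(root)}$ into edge and root marginals through the inverse of a graph Laplacian. As a reference point, here is a minimal NumPy sketch of that marginal computation under the single-root convention of Koo et al. (2007); the function and variable names are ours, not taken from the paper's implementation:

```python
import numpy as np

def mtc_marginals(F, root):
    """Matrix-tree marginals from non-negative scores (Koo et al., 2007).

    F[i, j] scores the directed edge i -> j; root[j] scores sentence j
    being the root. Returns edge marginals A[i, j] = P(z_ij = 1) and
    root marginals p_root[j]."""
    M = F.shape[0]
    F = F * (1.0 - np.eye(M))            # forbid self-loops
    L = np.diag(F.sum(axis=0)) - F       # graph Laplacian
    L_hat = L.copy()
    L_hat[0, :] = root                   # single-root constraint
    L_inv = np.linalg.inv(L_hat)
    p_root = root * L_inv[:, 0]          # P(sentence j is the root)
    A = np.zeros((M, M))
    for i in range(M):
        for j in range(M):
            if i != j:
                A[i, j] = ((j != 0) * F[i, j] * L_inv[j, j]
                           - (i != 0) * F[i, j] * L_inv[j, i])
    return A, p_root
```

The marginals come out properly normalized: the root probabilities sum to one, and for each sentence the root probability plus the marginals of all incoming edges also sums to one.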
Inspired by work on sparse self-attention (Zhang et al., 2021; Correia et al., 2019), we introduce an adaptive solution that injects sparsity into the MTC. We replace the exponential scoring function with the ReLU function ($\mathrm{ReLU}(x) = \max\{x, 0\}$ for $x \in \mathbb{R}$, applied coordinate-wise when $x$ is a vector) and calculate the root scores $f_i^{(root)}$ and edge scores $f_{ij}$ with a fully-connected layer and a bi-linear attention layer, respectively,

$$
\begin{array}{l} f_{i}^{(root)} = \mathrm{ReLU}(W_{r} \mathbf{s}_{i}^{(p)} + b_{r}) + \varepsilon, \\ f_{ij} = \mathrm{ReLU}\left(\mathbf{s}_{i}^{(p)\top} W_{bi} \mathbf{s}_{j}^{(c)}\right) + \varepsilon, \end{array}
$$

where $W_{bi}, W_r, b_r$ are learnable. (We use $\varepsilon = 10^{-6}$ to avoid matrix non-invertibility issues.) Compared to the exponential function, ReLU relaxes $F$ and $\mathbf{f}^{(root)}$ to be non-negative, and is thus capable of assigning zero probability and pruning dependency edges and roots. We finally plug these quantities into the standard MTC (Tutte, 1986) and marginalize the edge and root probabilities into the adjacency matrix $A(i,j) = P(z_{ij} = 1)$ and the root probability $p_i^r$ representing the hierarchical role (i.e., the likelihood of being a connector) of each sentence.

![](images/810caade56badc4a0a44d989c20414acce1eeb37b9fdd6c299f5b06dcfc6db05.jpg)
Figure 1: Architecture for the sequence-to-sequence model with the HierGNN reasoning encoder.

![](images/972d0183cac9c91c8018a98179a872d1da973dae64514f94ae577022ae8b2681.jpg)

# 3.2 Reasoning by Hierarchy-aware Message Passing

We present a novel message-passing mechanism over the learned hierarchical graph. This mechanism realizes inter-sentence reasoning, in which connectors aggregate information from their related information nodes while propagating the information to others.
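Given the adjacency matrix $A$ and root probabilities $p^r$ produced by the sparse MTC, one propagation hop can be sketched as follows. This is a simplified NumPy illustration in which the parametric functions are plain linear maps, the gate is a sigmoid, and the non-linearity is $\tanh$; these choices, and all names, are our assumptions rather than the paper's exact parameterization:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_norm(x, eps=1e-6):
    mu = x.mean(axis=-1, keepdims=True)
    sd = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sd + eps)

def hiergnn_step(H, A, p_root, Wr, Wn, Wg):
    """One hierarchy-aware message-passing hop.

    H: (M, d) sentence states; A: (M, M) edge marginals, where A[i, k]
    weighs how much node i aggregates from node k; p_root: (M,) root
    (connector) probabilities."""
    self_msg = H @ Wr                     # a node's own information
    nbr_msg = A @ (H @ Wn)                # aggregation over neighbours
    u = (1.0 - p_root)[:, None] * self_msg + p_root[:, None] * nbr_msg
    gate = sigmoid(np.concatenate([u, H], axis=-1) @ Wg)  # info gatekeeper
    return layer_norm(gate * np.tanh(u) + (1.0 - gate) * H)
```

A sentence with a high root probability (a connector) thus weighs the aggregated neighbour message more heavily, while a low-probability information node mostly keeps its own transformed state.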
For the $i$-th sentence node, the edge marginal controls the aggregation from its $K$ information nodes, and the root probability controls how the neighbouring information is combined into the $i$-th node's update $\mathbf{u}_i^{(l)}$ in the $l$-th reasoning layer,

$$
\mathbf{u}_{i}^{(l)} = (1 - p_{i}^{r})\, \mathcal{F}_{r}(\mathbf{s}_{i}^{(l)}) + p_{i}^{r} \sum_{k = 1}^{K} A_{ik}\, \mathcal{F}_{n}(\mathbf{s}_{k}^{(l)}),
$$

where $\mathcal{F}_r$ and $\mathcal{F}_n$ are parametric functions. Intuitively, if a sentence is a connector, it should have strong connectivity with its related information nodes and aggregate more details. Each information node learns to either keep the uniqueness of its information or fuse the information from the connectors. To filter out unnecessary information, we adopt a gated mechanism as the information gatekeeper in the node update,

$$
\begin{array}{r} \mathbf{g}_{i}^{(l)} = \sigma (\mathcal{F}_{g}([\mathbf{u}_{i}^{(l)}; \mathbf{h}_{i}^{(l)}])), \\ \mathbf{h}_{i}^{(l + 1)} = \mathrm{LN}(\mathbf{g}_{i}^{(l)} \odot \phi (\mathbf{u}_{i}^{(l)}) + (\mathbf{1} - \mathbf{g}_{i}^{(l)}) \odot \mathbf{h}_{i}^{(l)}), \end{array}
$$

where $\mathcal{F}_g$ is a parametric function and $\odot$ is the element-wise product. We use layer normalization (LN) to stabilize the output of the update function. The function $\sigma$ is the sigmoid function, and $\phi$ can be any non-linear function.

# 3.3 Reasoning Fusion Layer

We construct reasoning chains that consist of $L$ hops by stacking $L$ HierGNN blocks together. To handle cases where fewer than $L$ hops are needed, we add a fusion layer to aggregate the output from each reasoning hop to produce the final output of HierGNN.
A residual connection is also introduced to pass the node initialization directly to the output,

$$
\mathbf{h}_{i}^{(G)} = (W_{g} [\mathbf{h}_{i}^{(1)}, \dots, \mathbf{h}_{i}^{(L)}] + b_{g}) + \mathbf{h}_{i}^{(0)},
$$

where $W_{g}, b_{g}$ are learnable parameters. We use two approaches for stacking layers: (a) Layer-Shared Reasoning (LSR): we construct a shared reasoning graph first, followed by $L$ message-passing layers for reasoning; (b) Layer-Independent Reasoning (LIR): we learn the layer-wise latent hierarchical graphs independently, where each message-passing layer uses its own graph.

# 3.4 Graph-selection Attention Mechanism

In addition to token-level decoding attention, we propose a graph-selection attention mechanism (GSA) to inform the decoder of the learned hierarchical information while realizing sentence-level content selection. In each decoding step $t$, our decoder first obtains a graph context vector, $\mathbf{c}_G^t$, which entails the global information of the latent hierarchical graph. We first compute the graph-level attention distribution $\mathbf{a}_G^t$ by

$$
e_{v_{i}}^{t} = \mathrm{ATTN}^{(G)}(\mathbf{h}^{(L)}, \mathbf{z}_{t}),
$$

$$
\mathbf{a}_{G}^{t} = \mathrm{softmax}(\mathbf{e}^{t}),
$$

where $\mathrm{ATTN}^{(G)}$ is a graph attention function. The vectors $\mathbf{h}_i^{(L)} \in \mathbb{R}^d$ and $\mathbf{z}_t \in \mathbb{R}^d$ are the $L$-th layer node embedding for sentence $i$ and the decoding state at time $t$, respectively. The graph context vector $\mathbf{c}_G^t \in \mathbb{R}^d$ is finally obtained by summing all $\mathbf{h}_i^{(L)}$ weighted by $\mathbf{a}_G^t$.
The value of $\mathbf{c}_G^t$ is used as an additional input for computing token-level attention,

$$
e_{i}^{t} = \mathrm{ATTN}^{(T)}(\mathbf{h}_{enc}(X), \mathbf{z}_{t}, \mathbf{c}_{G}^{t}),
$$

$$
\mathbf{a}_{T}^{t} = \mathrm{softmax}(\mathbf{e}^{t}),
$$

where $\mathrm{ATTN}^{(T)}$ is a token-level attention function (Luong et al., 2015; Vaswani et al., 2017). Again, the token-level context vector $\mathbf{c}_T^t$ is computed by summing the encoder outputs weighted by $\mathbf{a}_T^t$. The final context vector $\mathbf{c}_f^t$ is fused from the graph context vector $\mathbf{c}_G^t$ and the token context vector $\mathbf{c}_T^t$ with a parametric function $g_{f}$: $\mathbf{c}_f^t = g_f(\mathbf{c}_G^t,\mathbf{c}_T^t)$.

# 4 Experimental Setting

Benchmarks. We evaluate our model on two common document summarization benchmarks. The first is the CNN/Daily Mail dataset (Hermann et al., 2015) in the news domain, with an average input of 45.7 sentences and 766.1 words, and a reference with an average length of 3.59 sentences and 58.2 words. We use the non-anonymized version of See et al. (2017), which has 287,084/13,367/11,490 instances for training, validation and testing. The second dataset we use is XSum (Narayan et al., 2018), a more abstractive benchmark consisting of one-sentence human-written summaries for BBC news. The average lengths of the input and the reference are 23.26 sentences (430.2 words) and 1 sentence (23.3 words), respectively. We follow the standard split of Narayan et al. (2018) for training, validation and testing (203,028/11,273/11,332).

Implementations. We experiment with the non-pretrained PGN of See et al. (2017) and the pretrained BART model (Lewis et al., 2020). The implementation details are in Appendix A.

Baselines.
We compare HierGNN with three types of baselines: 1) the base models on which HierGNN is built; 2) several strong non-pretrained and pretrained baselines; and 3) abstractive summarizers boosted with hierarchical information.
| Non-pretrained | R-1 | R-2 | R-L | BS |
|---|---|---|---|---|
| LEAD-3 | 40.34 | 17.70 | 36.57 | - |
| PGN | 39.53 | 17.28 | 36.38 | - |
| StructSum ES | 39.63 | 16.98 | 36.72 | - |
| StructSum LS | 39.52 | 16.94 | 36.71 | - |
| StructSum (LS + ES) | 39.62 | 17.00 | 36.95 | 21.70 |
| PGN - Ours | 39.07 | 16.97 | 35.87 | 23.74 |
| HierGNN-PGN (LSR) | 39.87 | 17.77 | 36.85 | 25.64 |
| HierGNN-PGN (LIR) | 39.34 | 17.39 | 36.44 | 25.26 |
| **Pretrained** | **R-1** | **R-2** | **R-L** | **BS** |
| BERTSUMABS | 41.72 | 19.39 | 38.76 | 29.05 |
| BERTSUMEXTABS | 42.13 | 19.60 | 39.18 | 28.72 |
| T5-Large | 42.50 | 20.68 | 39.75 | - |
| BART | 44.16 | 21.28 | 40.90 | - |
| Hie-BART | 44.35 | 21.37 | 41.05 | - |
| HAT-BART | 44.48 | 21.31 | 41.52 | - |
| BART - Ours | 44.62 | 21.49 | 41.34 | 33.98 |
| BART + SentTrans. | 44.44 | 21.44 | 41.27 | 33.90 |
| HierGNN-BART (LSR) | 44.93 | 21.70 | 41.71 | 34.43 |
| HierGNN-BART (LIR) | 45.04 | 21.82 | 41.82 | 34.59 |
Table 2: Automatic evaluation results in ROUGE scores and BERTScore (BS) on CNN/DM. The top and bottom blocks compare non-pretrained and pretrained models, respectively. We use **bold** to mark the best abstractive model.

We compare HierGNN-PGN with the non-pretrained baselines. We first include LEAD-3 (Nallapati et al., 2017), which simply selects the top three sentences of the article as the summary. StructSum (Balachandran et al., 2021) is a PGN-based model which incorporates structure information through an explicit attention mechanism (ES Attn) over a coreference graph and an implicit attention mechanism (IS Attn) over an end-to-end learned document structure. StructSum ES+IS Attn uses both implicit and explicit structures.

We compare HierGNN-BART with the pretrained baselines. BERTSumAbs and BERTSumExtAbs are two abstractive models by Liu and Lapata (2019) based on the BERT encoder. We also include a strong multitask sequence generation model, T5-Large. Hie-BART (Akiyama et al., 2021) enhances BART by jointly modeling sentence- and token-level information in the self-attention layer. HAT-BART (Rohde et al., 2021) appends a sentential Transformer block on top of BART's encoder to model sentence-level dependencies. We also develop a baseline, BART+SentTrans., which replaces our MTC block with a Transformer block. This baseline uses a comparable number of parameters to our HierGNN. We aim to verify the advantage of modeling the document's hierarchical information by MTC over just
| Non-pretrained | R-1 | R-2 | R-L | BS |
|---|---|---|---|---|
| LEAD-3 | 16.30 | 1.60 | 11.95 | - |
| Seq2Seq (LSTM) | 28.42 | 8.77 | 22.48 | - |
| Pointer-Generator | 29.70 | 9.21 | 23.24 | 23.16 |
| PGN + Coverage | 28.10 | 8.02 | 21.72 | - |
| HierGNN-PGN (LSR) | 30.14 | 10.21 | 24.32 | 27.24 |
| HierGNN-PGN (LIR) | 30.24 | 10.43 | 24.20 | 27.36 |
| **Pretrained** | **R-1** | **R-2** | **R-L** | **BS** |
| BERTSUMABS | 38.76 | 16.33 | 31.15 | 37.60 |
| BERTSUMEXTABS | 38.81 | 16.50 | 31.27 | 38.14 |
| T5 (Large) | 40.9 | 17.3 | 33.0 | - |
| BART | 45.14 | 22.27 | 37.25 | - |
| HAT-BART | 45.92 | 22.79 | 37.84 | - |
| BART - Ours | 44.97 | 21.68 | 36.47 | 52.89 |
| BART + SentTrans. | 45.12 | 21.62 | 36.46 | 52.95 |
| HierGNN-BART (LSR) | 45.19 | 21.71 | 36.59 | 52.94 |
| HierGNN-BART (LIR) | 45.39 | 21.89 | 36.81 | 53.15 |

Table 3: Automatic evaluation results in ROUGE scores and BERTScore (BS) on XSum. All of our HierGNN-PGN models are trained without a coverage mechanism. We use **bold** for the best model.
| Model | Rel. | Inf. | Red. | Overall |
|---|---|---|---|---|
| BERTSUMABS* | -0.43* | -0.33 | -0.11* | -0.29 |
| T5 | 0.08 | -0.09 | 0.05 | 0.01 |
| BART | 0.15 | **0.24** | -0.04 | 0.12 |
| HierGNN-BART | **0.20** | 0.19 | **0.09** | **0.16** |
increasing the model size.

# 5 Results

Automatic Evaluation. We evaluate the quality of summaries through ROUGE F-1 scores (Lin and Och, 2004) by counting the unigram (R-1), bigram (R-2) and longest common subsequence (R-L) overlaps. To avoid relying purely on lexical-overlap evaluation (Huang et al., 2020a), we also use BERTScore (Zhang et al., 2020).

We summarize the results for non-pretrained and pretrained models on CNN/DM and XSum in the upper and bottom blocks of Table 2 and Table 3, respectively. Our HierGNN module improves the performance over the PGN and BART

Table 4: Results for the human evaluation based on i) Relevance (Rel.), ii) Informativeness (Inf.), and iii) Redundancy (Red.). * indicates statistically significant improvements of our model over the baseline (by pair-wise t-test with $p < 0.05$, corrected using the Benjamini-Hochberg method to control the false discovery rate (Benjamini and Hochberg, 1995) for multiple comparisons). We bold the best results for each criterion and the overall evaluation. Detailed results are given in Appendix C.
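As a reference for the headline metric above, ROUGE-1 F-1 reduces to clipped unigram overlap between a candidate and a reference; a minimal sketch (standard evaluations use the official ROUGE toolkit, which additionally handles stemming and multiple references):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """ROUGE-1 F-1: harmonic mean of unigram precision and recall,
    with per-token overlap counts clipped by the reference."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge1_f1("the cat".split(), "the cat sat".split())` has precision 1 and recall 2/3, giving an F-1 of 0.8.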
| | R-1 | R-2 | R-L | BS |
|---|---|---|---|---|
| Full Model | 30.24 | 10.43 | 24.20 | 27.36 |
| w/o HierGNN Module | -0.54 | -1.22 | -0.96 | -4.20 |
| w/o Graph-select (GSA) | -0.41 | -0.41 | -0.17 | -0.27 |
| w/o Sparse MTC | -0.14 | -0.25 | +0.05 | -0.41 |
| w/o Graph Fusion | -0.94 | -0.81 | -0.77 | -1.39 |

Table 5: Ablation study of each module in our HierGNN-PGN (LIR) model on XSum.
| Model | Coverage (↗) | Copy Length (↘) |
|---|---|---|
| Reference | 20.27% | 5.10 |
| Pointer-Generator | 11.78% | 18.82 |
| Ours w/o Graph Select Attn. | 13.74% | 18.88 |
| Ours w/ Graph Select Attn. | 15.22% | 16.80 |
Table 6: Results for average copy length and coverage of the source sentences on the CNN/DM dataset. Arrows (↗ or ↘) indicate whether higher or lower scores are better, respectively.

for both CNN/DM and XSum, demonstrating the effectiveness of our reasoning encoder for non-pretrained and pretrained summarizers. Secondly, the best HierGNN-PGN model achieves higher scores than StructSum ES and ES+IS, which explicitly construct the document-level graph representation with an external parser during pre-processing. This indicates that our learned hierarchical structure is effective and beneficial for downstream summarization without any supervision. HierGNN-BART also outperforms Hie-BART, HAT-BART and BART+SentTrans., which indicates that the MTC encoder's inductive bias is effective in modeling useful structure.

Human Evaluations. We also invited human referees from Amazon Mechanical Turk to assess our model and three additional purely abstractive baselines (BERTSUMABS, T5-Large and BART) on the CNN/DM test set. Our assessment focuses on three criteria: i) Relevance (is the information conveyed in the candidate summary relevant to the article?), ii) Informativeness (how accurate and faithful is the information the candidate summary conveys?), and iii) Redundancy (are the sentences in each candidate summary non-redundant with each other?). The detailed settings for the human evaluation are presented in Appendix B. We ask the referees to choose the best and worst summaries from the four candidates for each criterion. The overall scores in Table 4 are computed as the fraction of times a summary was chosen as best minus the fraction it was selected as
| | R-1 | R-2 | BS |
|---|---|---|---|
| BART | 49.41 | 21.70 | 19.12 |
| HierGNN-BART | 49.62 | 21.74 | 20.32 |
![](images/69098e8f38777d1b961009e4588afc3b189b997aa102a6316a8c890064f47405.jpg)
Figure 2: Performance gap on PubMed between HierGNN-BART and BART when summarizing articles truncated at different lengths. The gap between HierGNN and BART consistently increases with input length.

worst. The results show that our HierGNN-BART achieves the overall best performance. Moreover, while BART has a slightly better informativeness score, HierGNN-BART produces better summaries in terms of relevance and redundancy.

Ablations. We conduct an ablation study (in Table 5) of the HierGNN encoder, graph-selection attention, sparse MTC and graph fusion layer. The ablation is done on our HierGNN-PGN LIR model trained on XSum. Removing the HierGNN reasoning module significantly degrades the model, which confirms the positive contribution of its cross-sentence reasoning functionality. The scores without GSA also confirm that the guidance of graph-level information is beneficial. By removing the graph fusion layer, we again observe a performance decrease, which demonstrates the benefit of fusing neighbour features from multiple hop distances. Finally, the results also confirm the superiority of the sparse MTC over the dense MTC for learning an effective hierarchical structure for summarization.

# 6 Discussion

Coverage and Copy Length. We report two metrics introduced by See et al. (2017) in Table 6. The coverage rate measures how much information in the source article is covered by the summary, while the average copy length indicates to what extent the summarizer directly copies tokens from the

Table 7: Summarization performance on PubMed. We test BART and HierGNN-BART with the same hyperparameter settings.
| CNN/DM | Comp. | 2-hop | 3-hop | 4-hop |
|---|---|---|---|---|
| Reference | 63.03 | 32.08 | 4.59 | 0.31 |
| BART | 79.52 | 17.81 | 2.43 | 0.24 |
| HierGNN-BART | 78.13 (↓) | 19.29 (↑) | 2.36 (↓) | 0.21 (↓) |
| **XSum** | **Comp.** | **2-hop** | **3-hop** | **4-hop** |
| Reference | 34.87 | 42.50 | 18.79 | 3.83 |
| BART | 28.47 | 42.51 | 23.05 | 5.98 |
| HierGNN-BART | 27.27 (↓) | 42.53 (↑) | 24.31 (↑) | 5.89 (↓) |
Table 8: Percentage of summary sentences synthesized by compression (information extracted from a single source sentence) and fusion (information combined from two or more source sentences). We use ↓ and ↑ to mark the changes between BART and HierGNN.

source article as its output. The higher coverage rate achieved by our HierGNN indicates that it can produce summaries with much richer information from the source article. Balachandran et al. (2021) find that PGN tends to over-copy content from the source article, thus degenerating into an extractive model, particularly on more extractive datasets such as CNN/DM. We find that the graph-selection attention significantly reduces the average copy length, indicating that it informs the decoder to stop copying by leveraging the structural information learned in the encoder, and that it reduces the reliance on PGN's copying functionality (See et al., 2017). We show a qualitative example of the graph-selection attention outcome in Appendix D.

In Tables 2 and 3, we observe that the layer-shared reasoning (LSR) architecture for HierGNN-PGN outperforms the layer-independent reasoning (LIR) architecture on CNN/DM, with the opposite being true on XSum. We attribute this difference to the inductive bias of the base model and the essential difference between the CNN/DM and XSum datasets. PGN-based models tend to copy, degenerating into extractive summarizers (Balachandran et al., 2021). With a more extractive dataset like CNN/DM, a complex reasoning procedure for the PGN-based model may not be necessary; instead, learning a single hierarchical structure and selecting the sentences to be copied accordingly is sufficient. However, XSum summaries are abstractive, and the dataset emphasizes combining information from multiple document sites (see discussion by Narayan et al. 2019). LIR then shows its advantage by learning a separate hierarchical structure in each layer.
For an abstractive + +![](images/20efdab128782547f3754d3cc9580b5006771ef4cf0295a8007aab2559a1439b.jpg) +Figure 3: Layer-wise intra-layer diversity (top) and inter-layer diversity (bottom) for BART with 2-layer HierGNN equipped with Sparse and Dense MTC. + +base model (BART), LIR consistently outperforms LSR on both CNN/DM and XSum. + +Compression or Fusion? To assess whether sentence fusion happens often, we quantify the ratio of sentence compression to sentence fusion that the model uses to generate summaries in Table 8 (Lebanoff et al., 2019). In comparison to BART, HierGNN reduces the proportion of sentence compression on both CNN/DM and XSum. Furthermore, the summarization models tend to adopt sentence compression more than occurs in human-written references for CNN/DM, while more sentence fusion is used for XSum. This observation reveals that the mechanism neural summarizers learn end-to-end for producing summaries is different from the one humans use. Human editors can flexibly switch between compression and fusion; the summarization models tend to adopt one of them to produce the output. + +Effectiveness for Longer Sequences. The performance of sequence-to-sequence models decays as the length of the input sequence increases (Liu et al., 2018) because they do not capture long-range dependencies. We hypothesize that HierGNN is better able to capture such dependencies via its learned hierarchical document structure, thus enhancing performance on long-sequence inputs. To verify this, we further conduct experiments on PubMed (Cohan et al., 2018), a long-document + +Top-3 Sentences with Highest Root Probabilities Our Sparse MTC : 8th Sent. 9.77 : A lunar eclipse happens when the sun, Earth and moon form a straight line in space, with the Earth smack in the middle. 6th Sent. 9.40 : The sun shines on the Earth and creates a shadow. 10th Sent. 
7.79 : Parts of South America, India, China and Russia also will be able to see the eclipse, but it won't be visible in Greenland, Iceland, Europe, Africa or the Middle East. + +Top-3 Sentences with Lowest Root Probabilities Our Sparse MTC : 20th Sent. Sparsified : Share your photos with CNN iReport. +18th Sent. Sparsified : If you want to learn more about the eclipse, NASA astronomer Mitzi Adams will take questions on Twitter NASA Marshall. +19th Sent. 0.02 : Did you see the total lunar eclipse? + +Reference: The total eclipse will only last 4 minutes and 43 seconds. People west of the Mississippi River will have the best view. Parts of South America, India, China and Russia also will see the eclipse. + +Ours: A total lunar eclipse started at 3:16 a.m. Pacific Daylight Time. People west of the Mississippi River will have the best view. Parts of South America, India, China and Russia also will be able to see the eclipse. The total eclipse will only last four minutes and 43 seconds. + +![](images/375403f6729dfc42e6ce29fbad00e525057a6b26d320c610d3f6235b638c7826.jpg) +Figure 4: Top: the top-3 sentences with the highest/lowest root probabilities, reference and summaries for article 23 in the CNN/DM test split. We underline the relevant contents; Bottom: visualizations of our sparse (Left) and the dense (Right) MTC layer for HierGNN-BART. + +![](images/059d741fa48ba3f68baf5504d4b4ed0d8190797ad7ad6bf75f06f64b12f2909c.jpg) + +summarization dataset with scientific articles in the medical domain. We summarize the performance in Table 7. We notice that HierGNN improves BART by a large margin. We further evaluate the advantages of HierGNN over vanilla BART with respect to inputs of various lengths. As shown in Figure 2, when the input is longer than 1.6K tokens, HierGNN has a positive advantage over BART. As the input length increases, the advantage of HierGNN consistently becomes larger. + +Sparse MTC or Dense MTC? 
We also study the expressive ability of our adaptively sparse variant of the matrix-tree computation. We design two quantitative metrics: 1) Intra-layer diversity measures the diversity of the marginal distributions of roots and edges within each MTC layer, calculated as the range of the probability distribution; 2) Inter-layer diversity measures the diversity of the marginal distributions of roots and edges between MTC layers, calculated as the average Jensen-Shannon (JS) divergence between the marginal distributions of roots and edges in different layers (Zhang et al., 2021; Correia et al., 2019). We compare both intra-layer and inter-layer diversity for our adaptively sparse MTC and the original dense MTC (Koo et al., 2007; Liu et al., 2019a; Balachandran et al., 2021). + +Figure 3 shows that our sparse variant of MTC has higher diversity on both the intra-layer (Top) and inter-layer (Bottom) metrics for CNN/DM and XSum, indicating that our sparse MTC has greater expressive power than dense MTC. We find that the sparsity of HierGNN differs across layers and datasets: 1) $99.66\%$ of HierGNN's predictions for XSum instances have at least one element that is sparsified to zero, while this proportion is $24.22\%$ for CNN/DM; 2) Almost all the sparsified elements in HierGNN's predictions for XSum are edges, while for CNN/DM they are roots; 3) $90.32\%$ of the elements of the edge distribution in the second MTC layer are sparsified in XSum, but there are no sparsified elements in the first layer. In CNN/DM, the proportions of sparsified elements in the first and second layers are almost identical. These observations reveal that sparse MTC can adaptively choose whether to sparsify elements in the root or edge distributions, thus boosting the richness of the structural information represented by MTC. 
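The two diversity metrics above can be sketched directly from their definitions. This is an illustrative implementation under stated assumptions (function names are ours; we take the range of a single marginal and average pairwise JS divergence across layers), not the authors' exact evaluation code:

```python
import numpy as np

def intra_layer_diversity(marginal):
    """Range (max - min) of a marginal probability distribution
    within one MTC layer."""
    p = np.asarray(marginal, dtype=float)
    return float(p.max() - p.min())

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions
    (natural log, so the maximum value is ln 2)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0/x) is taken to be 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def inter_layer_diversity(layer_marginals):
    """Average pairwise JS divergence between the marginal
    distributions of different MTC layers."""
    n = len(layer_marginals)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(js_divergence(layer_marginals[i], layer_marginals[j])
               for i, j in pairs) / len(pairs)
```

A uniform marginal has zero intra-layer diversity, and identical layers have zero inter-layer diversity, so higher values indicate a more expressive, less degenerate set of induced structures.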
+ +We finally show a qualitative case with the three sentences per article having the highest or lowest root probabilities (see Figure 4), together with heatmap visualizations of the hierarchical structures learned by the sparse and dense MTC. We observe that the highest-probability root sentences tend to be summary-worthy while also being scattered across different positions of the article, whereas the lowest-probability sentences are irrelevant. The structure learned by Sparse MTC tends to be more diverse and can successfully sparsify out the sentence nodes with irrelevant contents, e.g., the 18th and 20th sentences. + +# 7 Conclusion + +We propose HierGNN, a module that can be used in tandem with existing generation models. The module learns the hierarchical document structure while being able to integrate information from different parts of the text as a form of reasoning. Our experiments verify that HierGNN is effective in improving plain sequential summarization models. + +# Limitations + +The inductive bias of our HierGNN model assumes that the source article follows an "inverted pyramid" style of writing. This may pose limitations in the generalization of our model to other categories of input documents with no or only a weak hierarchical structure. Future work includes understanding the limitations of HierGNN in different input domains (e.g., conversation summarization). Additionally, as with other large-scale pretrained neural summarizers, our approach with an additional HierGNN encoder increases model complexity. To train our BART-based system, GPUs with at least 32GB of memory are required. Future work may focus on distilling the large HierGNN model into a much smaller size while retaining its original performance. + +# Ethical and Other Considerations + +Human evaluations. Human workers were informed of the intended use of the provided assessments of summary quality and complied with the terms and conditions of the experiment, as specified by Amazon Mechanical Turk. 
In regards to payment, workers were compensated fairly with the wage of £9 hourly (higher than the maximum minimum wage in the United Kingdom) i.e. £4.50 per HIT at 2 HITs per hour. + +Computing time. We first report the computing time for our most computationally intense HierGNN-BART (471 million parameters) using NVIDIA Tesla A100 with 40G RAM: with CNN/DM, the training takes around 81 GPU hours, and the inference takes 9.39 GPU hours. With XSum, the training takes around 32 GPU hours, and the inference takes 4.41 GPU hours. + +Additionally, training of HierGNN-PGN (32 million parameters) on CNN/DM takes 0.79 seconds per iteration using 1 NVIDIA V100 GPU card with 16GB. We estimate the inference time is 4.02 documents per second. + +# Acknowledgements + +We thank Zheng Zhao, Marcio Fonseca and the anonymous reviewers for their valuable comments. The human evaluation was funded by a grant from the Scottish Informatics and Computer Science Alliance (SICSA). This work was supported by computational resources provided by the EPCC Cirrus service (University of Edinburgh) and the Baskerville service (University of Birmingham). + +# References + +Kazuki Akiyama, Akihiro Tamura, and Takashi Ninomiya. 2021. Hie-BART: Document summarization with hierarchical BART. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 159-165, Online. Association for Computational Linguistics. +Vidhisha Balachandran, Artidoro Pagnoni, Jay Yoon Lee, Dheeraj Rajagopal, Jaime Carbonell, and Yulia Tsvetkov. 2021. StructSum: Summarization via structured representations. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2575-2585, Online. Association for Computational Linguistics. +Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. 
Journal of the Royal statistical society: series B (Methodological), 57(1):289-300. +Meng Cao, Yue Dong, and Jackie Cheung. 2022. Hallucinated but factual! inspecting the factuality of hallucinations in abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3340-3354, Dublin, Ireland. Association for Computational Linguistics. +Ronald Cardenas, Matthias Galle, and Shay B Cohen. 2022. On the trade-off between redundancy and local coherence in summarization. ArXiv preprint, abs/2205.10192. +Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615-621, New Orleans, Louisiana. Association for Computational Linguistics. +Gonçalo M. Correia, Vlad Niculae, and André F. T. Martins. 2019. Adaptively sparse transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2174-2184, Hong Kong, China. Association for Computational Linguistics. +Peng Cui, Le Hu, and Yuanchao Liu. 2020. Enhancing extractive text summarization with topic-aware graph neural networks. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5360-5371, Barcelona, Spain (Online). International Committee on Computational Linguistics. + +George E Dahl, Tara N Sainath, and Geoffrey E Hinton. 2013. Improving deep neural networks for lvcsr using rectified linear units and dropout. In 2013 IEEE international conference on acoustics, speech and signal processing, pages 8609-8613. IEEE. 
+Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, and Graham Neubig. 2021. GSum: A general framework for guided neural abstractive summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4830-4842, Online. Association for Computational Linguistics. +Marcio Fonseca, Yftah Ziser, and Shay B. Cohen. 2022. Factorizing content and budget decisions in abstractive summarization of long documents by sampling summary views. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). +Susan R Goldman, Arthur C Graesser, and Paul van den Broek. 1999. Narrative comprehension, causality, and coherence: Essays in honor of Tom Trabasso. Routledge. +Arthur C Graesser, Murray Singer, and Tom Trabasso. 1994. Constructing inferences during narrative text comprehension. Psychological review, 101(3):371. +Hardy Hardy and Andreas Vlachos. 2018. Guided neural language generation for abstractive summarization using Abstract Meaning Representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 768-773, Brussels, Belgium. Association for Computational Linguistics. +Karl Moritz Hermann, Tomás Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693-1701. +Dandan Huang, Leyang Cui, Sen Yang, Guangsheng Bao, Kun Wang, Jun Xie, and Yue Zhang. 2020a. What have we achieved on text summarization? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 446-469, Online. Association for Computational Linguistics. +Luyang Huang, Lingfei Wu, and Lu Wang. 2020b. 
Knowledge graph-augmented abstractive summarization with semantic-driven cloze reward. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5094-5107, Online. Association for Computational Linguistics. +Hanqi Jin, Tianming Wang, and Xiaojun Wan. 2020. Semsum: Semantic dependency guided neural abstractive summarization. In The Thirty-Fourth AAAI + +Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8026-8033. AAAI Press. +Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Linguistics. +Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured prediction models via the matrix-tree theorem. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 141-150, Prague, Czech Republic. Association for Computational Linguistics. +Jingun Kwon, Naoki Kobayashi, Hidetaka Kamigaito, and Manabu Okumura. 2021. Considering nested tree structure in sentence extractive summarization with pre-trained transformer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4039-4044, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019. Scoring sentence singletons and pairs for abstractive summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2175-2189, Florence, Italy. 
Association for Computational Linguistics. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics. +Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 605–612, Barcelona, Spain. +Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. + +Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730-3740, Hong Kong, China. Association for Computational Linguistics. +Yang Liu, Ivan Titov, and Mirella Lapata. 2019a. Single document summarization as tree induction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1745-1755, Minneapolis, Minnesota. Association for Computational Linguistics. +Zhengyuan Liu, Angela Ng, Sheldon Lee, Ai Ti Aw, and Nancy F. Chen. 2019b. Topic-aware pointer-generator networks for summarizing spoken conversations. 
In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 814-821. +Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics. +Ryan McDonald and Giorgio Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proceedings of the Tenth International Conference on Parsing Technologies, pages 121-132, Prague, Czech Republic. Association for Computational Linguistics. +Afonso Mendes, Shashi Narayan, Sebastião Miranda, Zita Marinho, Andre F. T. Martins, and Shay B. Cohen. 2019. Jointly extracting and compressing documents with summary state representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3955-3966, Minneapolis, Minnesota. Association for Computational Linguistics. +Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarrunner: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 3075-3081. AAAI Press. +Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807, Brussels, Belgium. Association for Computational Linguistics. + +Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2019. What is this article about? extreme summarization with topic-aware convolutional neural networks. Journal of Artificial Intelligence Research, 66:243-278. 
+Jonathan Pilault, Raymond Li, Sandeep Subramanian, and Chris Pal. 2020. On extractive and abstractive neural document summarization with transformer language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9308-9319, Online. Association for Computational Linguistics. +Horst Pottker. 2003. News and its communicative quality: the inverted pyramid—when and why did it appear? Journalism Studies, 4(4):501-511. +Tobias Rohde, Xiaoxia Wu, and Yinhan Liu. 2021. Hierarchical learning for generation with long source sequences. ArXiv preprint, abs/2104.07545. +Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379-389, Lisbon, Portugal. Association for Computational Linguistics. +Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics. +David A. Smith and Noah A. Smith. 2007. Probabilistic models of nonprojective dependency trees. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-ConNLL), pages 132-140, Prague, Czech Republic. Association for Computational Linguistics. +Kaiqiang Song, Lin Zhao, and Fei Liu. 2018. Structureinfused copy mechanisms for abstractive summarization. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1717-1729, Santa Fe, New Mexico, USA. Association for Computational Linguistics. +Simeng Sun, Ori Shapira, Ido Dagan, and Ani Nenkova. 2019. How to compare summarizers without target length? 
pitfalls, solutions and re-examination of the neural summarization literature. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 21-29, Minneapolis, Minnesota. Association for Computational Linguistics. +W. T. Tutte. 1986. Graph theory, by w. t. tutte, encyclopedia of mathematics and its applications, volume 21, Addison-wesley publishing company, menlo park, ca., 1984, 333 pp. price: 45.00. Networks, 16:107-108. + +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008. +Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuanjing Huang. 2020. Heterogeneous graph neural networks for extractive document summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6209-6219, Online. Association for Computational Linguistics. +Haonan Wang, Yang Gao, Yu Bai, Mirella Lapata, and Heyan Huang. 2021. Exploring explainable selection to control abstractive summarization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13933-13941. +Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu, Ziqiang Cao, Sujian Li, Hua Wu, and Haifeng Wang. 2021. BASS: Boosting abstractive summarization with unified semantic graph. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6052-6067, Online. Association for Computational Linguistics. +Yuexiang Xie, Fei Sun, Yang Deng, Yaliang Li, and Bolin Ding. 2021. Factual consistency evaluation for text summarization via counterfactual estimation. 
In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 100–110, Punta Cana, Dominican Republic. Association for Computational Linguistics. +Espen Ytreberg. 2001. Moving out of the inverted pyramid: narratives and descriptions in television news. Journalism Studies, 2(3):357-371. +Biao Zhang, Ivan Titov, and Rico Sennrich. 2021. Sparse attention with linear units. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6507-6520, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. +Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and Francis Lau. 2015. A c-lstm neural network for text classification. ArXiv preprint, abs/1511.08630. +Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, and Meng Jiang. 2021. Enhancing factual consistency + +of abstractive summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 718-733, Online. Association for Computational Linguistics. + +# A Implementation Details + +HierGNN-PGN is developed based on the PointerGenerator Network (See et al., 2017). To obtain the sentence representations, we use a CNN-LSTM encoder to capture both the $n$ -gram features and sequential features (Kim, 2014; Zhou et al., 2015). The CNN's filter windows sizes are set to be $\{1,2,3,4,5,7,9\}$ with 50 feature maps each. We set the dimension of the representations to be 512. The number of reasoning layers $L$ is set to 3 after a development set search in $\{1,2,3,5,10\}$ . 
Other settings follow the best hyperparameters for CNN/DM as in (See et al., 2017), and we use 60K iterations to train the coverage mechanism. For XSum, we discard the coverage training due to its redundancy for extreme summarization (Narayan et al., 2018), and we use a beam of size 6. We search the best model by the validation ROUGE scores on both datasets with one search trial per hyperparameter. + +
| #Layer | Val. PPL (↘) | R-1 (↗) | R-2 (↗) | R-L (↗) |
| --- | --- | --- | --- | --- |
| 1 | 8.61 | 30.06 | 10.09 | 24.23 |
| 2 | 8.58 | 29.94 | 10.00 | 24.13 |
| 3 | 8.51 | 30.24 | 10.43 | 24.20 |
| 5 | 8.54 | 30.14 | 10.23 | 24.32 |
| 10 | 8.61 | 29.99 | 9.93 | 24.13 |
+ +Table 9: Performance of HierGNN-PGN (LIR) on XSum with respect to the number of reasoning layers. $(\nearrow)$ and $(\searrow)$ indicate that larger and lower values are better, respectively. + +HierGNN-BART uses the pretrained architecture of BART (Lewis et al., 2020). We obtain the sentence representations with the same approach as Akiyama et al. (2021). On top of the sentence encoder, we add a two-layer HierGNN to boost the sentence representations. The GSA for HierGNN-BART is implemented as the cross-attention in the Transformer decoder, which first attends to the output of the reasoning encoder and then to the token encoder. For both CNN/DM and XSum, we follow the same fine-tuning settings as Lewis et al. (2020), except that we use 40K and 20K training steps, respectively. We select the best model by the label-smoothed cross-entropy loss on the validation set, with one search trial per hyperparameter. + +Evaluation Metrics. We use the implementation of ROUGE (Lin and Och, 2004) from Google Research. We use the official implementation of BERTScore (Zhang et al., 2020), with the model setting roberta-large_L17_noidf_version=0.3.9 as suggested. + +Datasets. We describe the pre-processing for each dataset as follows: + +- CNN/DM: For HierGNN-PGN, we directly use the data processed by See et al. For HierGNN-BART, we keep all preprocessing steps the same as Lewis et al. +- XSum: Following Lewis et al., we do not preprocess the XSum dataset, and use the original version from Narayan et al. (2018). +- PubMed: We use the pre-processing script from https://github.com/HHousen/ArXiv-PubMed-Sum. We remove instances whose article has fewer than 3 sentences or whose abstract has fewer than 2 sentences. We also remove three special tokens: newlines, `<S>` and `</S>`. 
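The PubMed filtering and cleaning steps above can be sketched as follows. This is an illustrative sketch, not the referenced repository's script; the function names are ours, and we assume the sentence-boundary markers in the raw data are `<S>`/`</S>`:

```python
def keep_instance(article_sents, abstract_sents):
    """Drop instances with an overly short article or abstract."""
    return len(article_sents) >= 3 and len(abstract_sents) >= 2

def strip_special_tokens(text):
    """Remove newlines and the (assumed) sentence-boundary markers,
    then collapse runs of whitespace."""
    for tok in ("\n", "<S>", "</S>"):
        text = text.replace(tok, " ")
    return " ".join(text.split())
```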
+ +# B Details for Human Evaluation + +We adopt several settings to control the quality of the human evaluation: 1) we only use data instances whose length difference between candidate summaries does not exceed 35 tokens (Sun et al., 2019; Wu et al., 2021). 2) When publishing the tasks on MTurk, we require all referees to be professional English speakers located in one of the following countries: i) Australia, ii) Canada, iii) Ireland, iv) New Zealand, v) the United Kingdom and vi) the United States, with a HIT Approval Rate greater than $98\%$ and more than 1,000 HITs approved. 3) We evaluate 25 instances from the CNN/DM test set in total, and each task is evaluated by three workers on MTurk. These settings yield an average inter-annotator agreement of $58.96\%$ , $64.92\%$ and $51.52\%$ for Relevance, Informativeness and Redundancy, respectively. + +# C Detailed Results for Human Evaluation + +We show the detailed proportions for each choice in the human evaluation in Table 10. 
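The Score column reported for each system is its proportion of Best selections minus its proportion of Worst selections, as in standard best-worst scaling. A minimal sketch, assuming every candidate system appears in every annotation task (function name is ours):

```python
from collections import Counter

def best_worst_scores(judgments):
    """judgments: one (best_system, worst_system) pair per annotator task.
    Returns score = P(selected best) - P(selected worst) per system."""
    n = len(judgments)
    best = Counter(b for b, _ in judgments)
    worst = Counter(w for _, w in judgments)
    systems = set(best) | set(worst)
    return {s: (best[s] - worst[s]) / n for s in systems}
```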
| Rel. | Best (↗) | Worst (↘) | Score (↗) |
| --- | --- | --- | --- |
| HierGNN-BART | 0.40 | 0.20 | 0.20 |
| BART | 0.29 | 0.15 | 0.14 |
| T5-Large | 0.25 | 0.17 | 0.08 |
| BERTSUMABS | 0.04 | 0.48* | -0.44 |

| Inf. | Best (↗) | Worst (↘) | Score (↗) |
| --- | --- | --- | --- |
| HierGNN-BART | 0.35 | 0.16 | 0.19 |
| BART | 0.43 | 0.19 | 0.24 |
| T5-Large | 0.17 | 0.27 | -0.09 |
| BERTSUMABS | 0.05 | 0.39* | -0.34 |

| Red. | Best (↗) | Worst (↘) | Score (↗) |
| --- | --- | --- | --- |
| HierGNN-BART | 0.31 | 0.21 | 0.10 |
| BART | 0.21 | 0.25 | -0.04 |
| T5-Large | 0.31 | 0.25 | 0.06 |
| BERTSUMABS | 0.17 | 0.28 | -0.11 |
+ +Table 10: Detailed summary of the human evaluation in terms of Relevance (Rel.), Informativeness (Inf.) and Redundancy (Red.). We show the proportion of times each system was selected as the Best/Worst among the four candidates. $(\nearrow)$ and $(\searrow)$ indicate that larger and lower values are better, respectively. $*$ : HierGNN-BART's scores are significantly better than those of the corresponding system (by pair-wise t-test with $p < 0.05$ , corrected using the Benjamini-Hochberg method (Benjamini and Hochberg, 1995) to control the false discovery rate under multiple comparisons). + +# D Qualitative Case for Graph-Selection Attention + +To demonstrate the effectiveness of the graph-selection attention (GSA) in HierGNN, we visualize the graph-selection attention and compare the token attentions with and without GSA (see Figure 5). It turns out that graph-selection attention mostly focuses on the top sentences but still captures critical information in the later parts of the article. In this case, graph-selection attention successfully captures "fifth title in Miami" and "Andy Murray" from the middle part of the article during decoding (marked in blue). In contrast, the model without graph-selection attention continuously produces content about the event "Novak Djokovic beat John Isner" (marked in red). + +# Article 4384: + +Two hours before the Miami Open semifinal, Novak Djokovic practiced his returns in an empty stadium, the ball coming at him quickly because his hitting partner stood three feet inside the baseline to emulate big-serving John Isner. The drill helped. Djokovic achieved a breakthrough service break against Isner and won Friday night, 7-6 (3), 6-2. 'He's probably the best server we have in the game,' Djokovic said. (2 sentences are abbreviated here) Novak Djokovic beat John Isner in straight sets to reach the final of the Miami Open on Friday night. (4 sentences are abbreviated here) The No. 
1-seeded Djokovic closed to within one win of his fifth Key Biscayne title. His opponent Sunday will be two-time champion Andy Murray, who defeated Tomas Berdych 6-4, 6-4. (6 sentences are abbreviated here) Djokovic is aiming to win his fifth title in Miami and will take on Scotsman Murray in Sunday's final. (3 sentences are abbreviated here) + +# Summaries: + +# Reference: + +Novak Djokovic beat John Isner 7-6. The world No. 1 will take on Andy Murray in Sunday's final. Djokovic is bidding to win his fifth title at Key Biscayne. + +# HierGNN-PGN LIR w/ GSA: + +Novak Djokovic beat John Isner in straight sets to reach the Miami Open. The No.1-seeded Djokovic closed to within one win of his fifth Key Biscayne title. Djokovic will be two-time champion Andy Murray, who defeated Tomas Berdych 6-4. + +# HierGNN-PGN LIR w/o GSA: + +Novak Djokovic beat John Isner in straight sets to reach the final of the Miami Open on Friday night. Djokovic achieved a breakthrough service break against Isner and won Friday night, 7-6 (3), 6-2. His opponent Andy Murray defeated Tomas Berdych 6-4, 6-4. + +![](images/7b4d7c7b8770ff72a4ac0c7148fdd2365dcdea1e276e0f92a5e406f741915528.jpg) +Figure 5: Top Table: CNN/DM test article 4384 and produced summaries; Bottom Figure: visualization of GSA (left), HierGNN LIR's token-level attention w/ GSA (right-bottom), and HierGNN-PGN LIR w/o GSA (right-top). X-axis and Y-axis are the encoding and decoding steps, respectively. 
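A two-stage attention of the kind described in Appendix A (first over sentence-level reasoning-encoder outputs, then over tokens) can be sketched for a single decoder query. This is a hypothetical illustration, not the paper's implementation: shapes, the shared query, and the way sentence attention reweights token attention are all our assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def graph_selection_attention(query, sent_states, tok_states, sent_of_token):
    """Two-stage attention: attend over sentences first, then reweight each
    token's score by the attention mass of the sentence it belongs to.

    query:         (d,) decoder state
    sent_states:   (n_sents, d) reasoning-encoder outputs
    tok_states:    (n_toks, d) token-encoder outputs
    sent_of_token: (n_toks,) index of each token's sentence
    """
    sent_attn = softmax(sent_states @ query)                      # (n_sents,)
    tok_scores = tok_states @ query                               # (n_toks,)
    tok_attn = softmax(tok_scores + np.log(sent_attn[sent_of_token] + 1e-9))
    return tok_attn @ tok_states                                  # (d,) context
```

Under this scheme, tokens inside low-probability sentences are suppressed regardless of their individual scores, which is one way structural information can tell the decoder where to stop copying.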
![](images/e9bd36c2094372a60fce12720eebd003a5126272a1338bc9b83346ca8f98408e.jpg)

![](images/097b9324a1893075609623ea49a0bcbd3c04c9d84fd3e74b9ba316ce878b8f34.jpg)

# Abstract Visual Reasoning with Tangram Shapes

Anya Ji $^{1}$ , Noriyuki Kojima $^{1*}$ , Noah Rush $^{1*}$ , Alane Suhr $^{1,3*}$ , Wai Keen Vong $^{2}$ , Robert D. Hawkins $^{4}$ , and Yoav Artzi $^{1}$

$^{1}$ Cornell University $^{2}$ New York University $^{3}$ Allen Institute for AI $^{4}$ Princeton University

{aj592, nk654}@cornell.edu noahjrush@gmail.com

waikeen.vong@nyu.edu suhr@cs.cornell.edu

rdhawkins@princeton.edu yoav@cs.cornell.edu

# Abstract

We introduce KILOGRAM, a resource for studying abstract visual reasoning in humans and machines.
Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with $>1\mathrm{k}$ distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which dramatically improves with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs. + +# 1 Introduction + +Reference is a core function of natural language that relies on shared conventions and visual concepts. For example, in English, a speaker may use the term dog to refer to a particular animal of the species canis familiaris, or, through abstraction, to an object with a less strongly conventionalized name, such as the shape at the top of Figure 1. A speaker might refer to such a shape as looking like a dog, and even point to its parts, like its head and tail, despite having few visual features in common with the ordinary referent. + +Comprehension and generation of references are critical for systems to engage in natural language interaction, and have been studied extensively with focus on ordinary references (e.g., Viethen and Dale, 2008; Mitchell et al., 2010; Fitzgerald et al., 2013; Mao et al., 2016; Yu et al., 2016), in contrast to the visual abstraction illustrated in Figure 1. + +![](images/7fbaaa88f06db4f812fa47e6038c668632825eab655409fba5c33ccf37dd2b70.jpg) + +![](images/d66f89601838d0d3ef5ae175ab227bc1bdda752ca78b3dd41aa9caf9612c44fe.jpg) + +![](images/c5dd46626b5dcd3c408aa4332861c4e1db5c5e15ee8e8670a7b295eabd2c2f68.jpg) +Figure 1: Two example tangrams, each with two different annotations. 
Each annotation includes a whole-shape description (bold), segmentation to parts (in color), and naming of parts (linked to each part). The top example shows low variability with near-perfect agreement, while the bottom shows high variability with divergence of language and segmentation.

![](images/1964b967ce9d860e66804cb06789330a10299afe81de928816a29ea9a1e84e05.jpg)

We address this gap by adopting an influential paradigm for probing human coordination in the cognitive science literature: reference games with abstract tangram shapes (e.g., Clark and Wilkes-Gibbs, 1986; Fox Tree, 1999; Hawkins et al., 2020).

Unlike photographs of natural objects, where there is often a single canonical label, tangrams are fundamentally ambiguous. While some shapes fall under strong existing conventions and elicit consensus about appropriate names (e.g., Figure 1, top), others are characterized by weaker conventions (e.g., Figure 1, bottom) and every speaker may arrive at a distinct but valid description (Zettersten and Lupyan, 2020; Hupet et al., 1991). While such diversity is a key consideration motivating their use as stimuli, existing behavioral studies have typically been limited to a relatively small set of 10-20 shapes, highly restricting the overall diversity of the stimulus class. It also limits their applicability for training and analyzing vision and language models, where significantly more data is necessary.

In this paper, we significantly expand this resource. We introduce KILOGRAM, $^{1}$ a large collection of tangrams with rich language annotations. KILOGRAM dramatically improves on existing resources along two dimensions. First, we curate and digitize 1,016 shapes, creating a set that is two orders of magnitude larger than collections used in existing work. This set dramatically increases coverage over the full range of naming variability, providing a more comprehensive view of human naming behavior.
Second, rather than treating each tangram as a single whole shape, our images are vector graphics constructed from the original component puzzle pieces. This decomposition enables reasoning about both whole shapes and their parts. + +We use this new collection of digitized tangram shapes to collect a large dataset of textual descriptions, reflecting a high diversity of naming behaviors. While existing work has focused on naming the complete shape, we also ask participants to segment and name semantically meaningful parts. We use crowdsourcing to scale our annotation process, collecting multiple annotations for each shape, thereby representing the distribution of annotations it elicits, rather than a single sample. In total, we collect 13,404 annotations, each describing a complete object and its segmented parts. + +The potential of KILOGRAM is broad. For example, it enables the data-driven scaling of studies of human interactions and models of whole-part reasoning in language and vision models. In this paper, we use KILOGRAM to evaluate the visual reasoning capacities of recent pre-trained multi-modal models, focusing on generalizing concepts to abstract shapes. We observe limited generalization of this type in pre-trained models, but significant improvements following fine-tuning with our data. We also see how explicitly referring to and visualizing parts can help reference resolution. Data and code, as well as a data viewer are available at: https://lil.nlp.cornell.edu/kilogram/. + +# 2 Background and Related Work + +Abstract or ambiguous visual stimuli have been widely used to investigate how human partners coordinate when talking about things in the absence of strong naming conventions going back to Krauss and Weinheimer (1964). Tangrams as stimuli were introduced by Clark and Wilkes-Gibbs (1986). 
These shapes are all built from the same seven primitives, but elicit a wide range of figurative descriptions that conceptualize shapes in different ways (Schober and Clark, 1989; Horton and Gerrig, 2002; Duff et al., 2006; Holler and Wilkin, 2011; Horton and Slaten, 2012; Ibarra and Tanenhaus, 2016; Shore et al., 2018; Atkinson et al., 2019; Castillo et al., 2019; Bangerter et al., 2020). It has been observed that some shapes are easier or harder to describe (Hupet et al., 1991; Zettersten and Lupyan, 2020; Brashears and Minda, 2020), a property known as nameability or codability, which has also been studied with non-tangram shapes (e.g., line drawings; Snodgrass and Vanderwart, 1980; Cycowicz et al., 1997; Dunabeitia et al., 2018). Even though diversity is a key consideration in working with tangrams, existing stimuli sets are relatively small, limiting their usefulness as NLP benchmarks, where scale is critical. Even the largest studies of variability in naming (e.g., Murfitt and McAllister, 2001) have used a relatively small set of 60 tangrams. Fasquel et al. (2022) present a resource that is related and complementary to ours, including 332 PNG-formatted tangrams with whole-shape naming annotations in French.

Contemporary pre-trained vision and language approaches can be categorized along an axis characterizing how they encode the data, from jointly encoding the two inputs (Lu et al., 2019; Chen et al., 2020; Kim et al., 2021) to encoding them separately (Radford et al., 2021; Jia et al., 2021). Joint encoding aims to capture tighter interaction between the input modalities compared to separate encoding, but is generally more computationally expensive, and can only operate on multi-modal input. We study recent models on both ends: ViLT (Kim et al., 2021) for joint encoding and CLIP (Radford et al., 2021) for separate encoding.
These models are typically evaluated on image captioning (e.g., Chen et al., 2015) or visual question answering (e.g., Antol et al., 2015) benchmarks. Several benchmarks, such as NLVR (Suhr et al., 2017, 2019) and Winoground (Thrush et al., 2022), aim for more focused evaluations, with an emphasis on compositionality. We build on these efforts, but target generalization through abstraction using visually ambiguous stimuli. This is inspired by the role of abstraction in human cognition. Abstraction is a key step in human perception (Biederman, 1987) that is critical for generalization (Gentner and Markman, 1997; Medin et al., 1993; Shepard, 1987), and forms the shared foundation on which human language communication is layered (Lupyan and Winter, 2018; McCarthy et al., 2021; Wong et al., 2022). Our focus on part decomposition is aligned with how part identification plays an important role in human abstraction (Tversky and Hemenway, 1984).

![](images/d04fa42d37fd5974f9417db426718d954602366b78bd52c4b6710e21aad774a5.jpg)
Figure 2: The two phases of our annotation task.

# 3 Data Collection

We scan a large set of tangram puzzles to vector graphics, and crowdsource annotations of natural language descriptions and part segmentations.

# 3.1 Collecting Tangram Puzzles

Tangram puzzles are made of seven primitive shapes (Elffers, 1977), which can be combined in a large variety of configurations evoking different concepts. We scan 1,004 tangrams depicting a broad set of concepts to vector graphic SVGs from Slocum (2003). Appendix A.1 shows example tangrams; Appendix A.2 details our process. We also manually add 12 tangrams commonly used in previous studies (Hawkins et al., 2020).

# 3.2 Whole-Part Annotation

We design a two-stage crowdsourcing task to elicit natural language English descriptions for each tangram, both of the whole shape and of its parts (Figure 2).
First, in the whole-shape description stage, the worker is shown a tangram image in grayscale and asked to complete the prompt "This shape, as a whole, looks like _____." In the part annotation stage, the worker is asked to select one or more puzzle pieces, and complete the prompt "The part(s) you selected look(s) like _____." These pieces are then colored and the annotation appears in the corresponding color. The annotator can delete annotations, annotate a part as UNKNOWN when they are not sure about its semantics, and add pieces to existing
| Statistic | Value |
|---|---|
| **Mean Description Length** | |
| Whole-shape description | 2.28 ± 1.62 |
| Part description | 1.31 ± 0.77 |
| **Vocabulary Size** | |
| Whole-shape description | 3,031 |
| Part description | 3,110 |
| Overall | 4,522 |
| **Part Segmentation** | |
| Mean parts per shape | 3.63 ± 1.28 |
| Mean pieces per part | 1.93 ± 1.20 |
+ +Table 1: Data statistics for the complete dataset. + +parts. All pieces must be annotated to submit the task, yielding a complete segmentation map. + +We use Amazon Mechanical Turk for data collection. Workers are required to be located in the United States with at least a $98\%$ HIT acceptance rate, must pass a qualification task, and complete a survey about their language proficiency (see Appendix A.3 for further details). To prevent a small group of workers from dominating the data, each annotator is only allowed to annotate each tangram once, and cannot annotate more than 200 distinct tangrams. Workers are paid 0.14 USD per task. $^{3}$ + +We first collect 10,053 annotations for the 1,004 scanned tangrams, at least 10 annotations for each tangram (mean=10.01). Following this stage of annotation, we collect additional annotations for a subset of the tangrams to create a set with denser language and part segmentation annotation. We sample 62 tangrams to be representative of the different levels of diversity in annotations we observe in the initially collected data. Appendix A.4 describes the sampling procedure. We also add the 12 tangrams from previous studies for a total of 74 tangrams for dense annotation. We conduct additional annotation tasks to have at least 50 annotations for each of the 74 tangrams selected for dense annotation (mean=53.66). The dense annotation gives us a better estimate of the distribution of language for the 74 selected tangrams, for example to use as reference texts in generation tasks. + +In total, we collect 13,404 annotations for 1,016 tangrams at a total cost of 2,172.94 USD. We lowercase and stem to compute vocabulary size, and tokenize on white spaces to compute description length. Table 1 shows basic data statistics. A total + +
| | FULL | DENSE | DENSE10 |
|---|---|---|---|
| SND | 0.91 ± 0.11 | 0.93 ± 0.06 | 0.90 ± 0.15 |
| PND | 0.76 ± 0.19 | 0.79 ± 0.15 | 0.73 ± 0.20 |
| PSA | 5.30 ± 0.62 | 5.09 ± 0.53 | 5.34 ± 0.77 |
Table 2: Mean and standard deviation of our analysis measures on the three sets.

of 297 MTurk workers participate in the annotation, with $98.0\%$ of the workers speaking English as their first language. Those who do not speak English as their first language still rate their English proficiency level as native or close to native. $1.0\%$ of the workers speak more than one language, among which the most common are Spanish, German, Japanese, and Chinese.

# 3.3 Standard Data Splits

We split the dataset for analysis and learning experiments. For analysis, we create two overlapping sets: FULL and DENSE. FULL includes 1,016 tangrams, each with 10-11 annotations (mean=10.11). It includes the 10,053 annotations initially collected for the scanned 1,004 tangrams. For the 12 commonly used tangrams, we sample 10 annotations from the later collection effort. DENSE includes all annotations for the 74 densely annotated tangrams, with at least 50 annotations per tangram (mean=53.66). We also define the set DENSE10 to include only the annotations from the sparse set for the densely annotated tangrams. For learning experiments, we split according to tangrams to create training (692 tangrams), development (125), test (125), and test-dense sets (74). All densely annotated tangrams are in test-dense. The other three sets are split randomly.

# 4 Data Analysis

The language and concepts annotators use reflect varying degrees of consensus around conventions for describing the appearance of shapes and their parts. For analysis, we preprocess the annotations by lowercasing, tokenizing, lemmatizing, and removing stop words using NLTK (Bird, 2004). We use the larger FULL set for our analyses (Section 3.3), unless otherwise noted.
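The preprocessing pipeline above can be approximated in a few lines. This is a simplified, dependency-free stand-in for the paper's NLTK pipeline: lemmatization is omitted, and the stop-word set below is a toy list standing in for NLTK's full English list, so the function name and word list are illustrative, not the authors' code.

```python
import re

# Toy stop-word list; the paper uses NLTK's full English stop-word corpus.
STOP_WORDS = {"a", "an", "the", "of", "and", "or", "with", "an", "to", "is"}

def preprocess(description):
    """Lowercase, tokenize, and remove stop words from an annotation.

    The paper additionally lemmatizes with NLTK; that step is omitted
    here to keep the sketch self-contained.
    """
    tokens = re.findall(r"[a-z]+", description.lower())
    return [t for t in tokens if t not in STOP_WORDS]
```

For example, `preprocess("The head of a dog")` yields `["head", "dog"]`.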
For a broad overview of the types of concepts evoked, we manually tag 250 randomly sampled annotations: $30.8\%$ use human-like concepts (e.g., dancer), $31.2\%$ animate but non-human concepts (e.g., dog), and $38.0\%$ non-animate concepts (e.g., house). We examine how part words differ across whole-shape concepts by extracting head words from whole-shape and part descriptions. Figure 3 shows the distribution of part head words for each of 272 whole-shape head words with $>10$ occurrences, ranked in order of frequency. Figure A.2 in the appendix illustrates how the most common part word head is used in different tangrams.

A central problem of visual abstraction is the degree of ambiguity or subjectivity that a shape evokes across different people (Murthy et al., 2022): some descriptions have higher consensus than others. We define three measures of variability along different dimensions: shape naming divergence (SND), part naming divergence (PND), and part segmentation agreement (PSA). Table 2 lists the mean and standard deviation for these three measures over the sparsely and densely annotated data.

Shape Naming Divergence (SND) A tangram's SND quantifies the variability among whole-shape annotations. SND is an operationalization of nameability, a criterion commonly used to measure how consistently an object is named across individuals (e.g., Zettersten and Lupyan, 2020).

Formally, a whole-shape annotation is a sequence of $M$ tokens $\bar{x} = \langle x_1,\ldots ,x_M\rangle$ .
Given a tangram with $N$ annotations $\bar{x}^{(j)}, j = 1,\dots ,N$ , each of length $M^{(j)}$ , we define $w_{i}^{(j)}$ for each token $x_{i}^{(j)}$ in annotation $\bar{x}^{(j)}$ as the proportion of other annotations of that tangram that do not contain $x_{i}^{(j)}$ :

$$
w _ {i} ^ {(j)} = \frac {1}{N - 1} \sum_ {j ^ {\prime} \neq j} \mathbb {1} \left[ x _ {i} ^ {(j)} \notin \bar {x} ^ {(j ^ {\prime})} \right], \tag {1}
$$

where $\mathbb{1}$ is an indicator function. The divergence of annotation $\bar{x}^{(j)}$ is $W^{(j)} = \frac{1}{M^{(j)}}\sum_{i=1}^{M^{(j)}}w_{i}^{(j)}$ . The divergence of a tangram is $W = \frac{1}{N}\sum_{j=1}^{N}W^{(j)}$ . For example, the SNDs of the tangrams in Figure 1 computed only with the two annotations displayed are 0.00 (top) and 1.00 (bottom).

Mean SND is relatively high in our data, with 0.91 on FULL (Table 2). We observe relatively similar values for DENSE and DENSE10, albeit with lower standard deviation for DENSE, as expected with more annotations. Annotators often use words that are unique to their annotation. We observe perfect consensus for only one tangram, and mostly similar annotations with relatively few deviations for a few others. Figure 5 shows several examples.

Part Naming Divergence (PND) PND measures annotation divergence for part name annotations

![](images/bafe2198b7234382c63f4d8daa28a88dded72e7c409fdb559d38948fd3ee9a86.jpg)
Figure 3: Part distributions for different head words. Whole-shape head words (shown in descending order of frequency from left) elicit a variety of part head word distributions. Colors are randomly assigned to part head words, but are fixed across all bars. Grey indicates part head words with $< 0.005$ frequency.

![](images/df8e77362aae68466a00934e75397350ceca6228d9e37511e481d7ec6bf95e1c.jpg)
Figure 4: Per tangram SND, PND, and PSA mean values and $95\%$ confidence interval. Tangrams are ordered along the $x$ -axis in ascending order according to the plotted measure.
Values are calculated by bootstrapping with 1,000 resamplings. In the FULL plots, the 74 densely annotated tangrams are colored red. + +collected in the second step of the annotation task. PND is computed identically to SND, but with the concatenation of all part names of an annotation as the input text $\bar{x}$ . For example, the PNDs of the two tangrams in Figure 1 computed with only the two annotations displayed are 0.19 (top) and 1.00 (bottom). In general, part descriptions are more similar than whole-shape descriptions with mean PND of 0.76 (Table 2). + +Part Segmentation Agreement (PSA) Annotators segment the tangrams into parts by grouping the tangram puzzle pieces. PSA quantifies the agreement between part segmentations as the maximum number of pieces that does not need to be moved to another group in order to edit one segmentation to another. We compute PSA as a linear + +sum assignment problem with maximum weight matching. For each pair of segmentations, we create a cost matrix, where the number of rows is the number of parts in one annotation and the number of columns is the number of parts in the second annotation. The value of each matrix element is the number of matching puzzle pieces between the two corresponding parts in the two annotations. The tangram PSA is the mean of costs for all annotation pairs. For example, the PSAs of the two tangrams in Figure 1 computed with only the two annotations displayed are 6.00 (top) and 3.00 (bottom). + +The mean PSA in our data is 5.30 (Table 2), with an approximately normal distribution of values. Some tangrams have strong segmentation cues, such that annotators reach perfect consensus, while others elicit significant segmentation disagreement. + +Dense Annotations The comparison of FULL, DENSE, and DENSE10 illustrates how well our data approximates the real distribution of annotations for each tangram, and the advantage of DENSE. Figure 4 shows the complete distribution of values. 
Comparing DENSE10 and DENSE, the rankings of the tangrams are largely the same with the additional annotations: for SND, Spearman's rank correlation coefficient is $r(72) = .78$ , $p \ll .001$ ; for PND, $r(72) = .87$ , $p \ll .001$ ; for PSA, $r(72) = .76$ , $p \ll .001$ . The tangrams sampled for DENSE represent well the distribution of tangrams along the different measures, as illustrated by the red highlights in Figure 4. + +Inter-measure Correlations Figure 5 illustrates the correlations between the three measures. The divergences of the two types of language annotations, whole-shape and part descriptions, show moderate positive correlation $r(1014) = .531$ , $p \ll .001$ . This indicates that tangrams that are annotated with similar whole-shape descriptions are often annotated with similar part descriptions. + +![](images/1386baeebaa657f26ce5b1bd2e5c32cb863632a4fd20f2bd8157a135ea7d47b7.jpg) + +![](images/c68f86b911117cc232bb042e82a4964af0a00881d7d08a7ee59b1e4b507e3cc1.jpg) +Figure 5: SND, PND, and PSA correlations computed over the FULL set. Representative examples of different SND and PSA values are illustrated on the right. Densely annotated examples are highlighted in red. + +![](images/d430305a58ed8b5905e152d7445526f4eacb655a7cafe9732e561af9d38070be.jpg) + +Nevertheless, many tangrams with similar whole shape descriptions have diverse part descriptions. The correlations between language annotation divergence and PSA are lower, $r(1014) = -.216$ , $p \ll .001$ for SND and PSA and $r(1014) = -.165$ , $p \ll .001$ for PND and PSA. + +# 5 Visual Reasoning with Tangrams + +We use KILOGRAM to evaluate the reasoning of CLIP (Radford et al., 2021) and ViLT (Kim et al., 2021) through a reference game task, where the model is given a textual description and selects the corresponding image from a set of images. 
Formally, given a textual description $\bar{x}$ and a set of $k$ images $\mathcal{I} = \{I_1,\dots,I_k\}$ , the task is to select the image $I_{i}\in \mathcal{I}$ corresponding to $\bar{x}$ . We cast the task as computing a similarity score $f(\bar{x},I_i)$ between the description $\bar{x}$ and an image $I_{i}$ . We select the corresponding image as $I^{*} = \arg \max_{I_{i}\in \mathcal{I}}f(\bar{x},I_{i})$ . + +# 5.1 Reference Game Generation + +We randomly generate reference games for an annotated text-image pair $(\bar{x},I)$ by sampling additional $k - 1$ images from data under several constraints. We do not include repeating images in the set of $k$ images or images that have identical whole-shape text annotations. This avoids obvious ambiguity that is impossible to resolve in the target selection. We also require all images to be annotated with the + +same number of parts. This reduces the chance of the model relying on simple part counting to discriminate between target images when including parts in the text (condition PARTS below). Appendix A.8 shows the impact of these constraints through analyzing experiments not using them. + +# 5.2 Models + +We instantiate $f$ using CLIP or ViLT, two models based on the Transformer architecture (Vaswani et al., 2017). We provide a brief review of the models, and refer the reader to the respective papers for further details. + +CLIP uses two separate encoders to generate separate fixed-dimension representations of the text and images. It uses contrastive pre-training with a symmetric cross entropy loss on a large amount of aligned, but noisy web image-text data. We implement the scoring function $f$ with CLIP by encoding the text $\bar{x}$ and all images $I \in \mathcal{I}$ separately, and then computing the dot-product similarity score of the text with each image. This is identical to the CLIP pre-training objective, which potentially makes CLIP suitable for our task out of the box. 
+ +ViLT uses a single encoder that takes as input both the text and image inputs together. ViLT pre-training also uses aligned image-text data, but from existing benchmarks (Lin et al., 2014; Krishna et al., 2016; Ordonez et al., 2011; Sharma + +Figure 6: Illustration of the language and vision modalities under the different experimental conditions. + +et al., 2018). It is pre-trained using multiple self-supervised objectives, including image-text matching via a binary classification head, which is suitable for our task out of the box. We implement $f$ using this classification head. Given a text $\bar{x}$ and an image $I \in \mathcal{I}$ , we compute their similarity using the matching classification head. + +# 5.3 Experimental Conditions + +We study several input variants. Figure 6 illustrates the modalities under the different conditions, and Appendix A.5 shows complete example inputs. For the textual description $\bar{x}$ , we experiment with including the whole-shape description only (WHOLE) or adding part names (PARTS) by combining with the whole-shape description using the template with , ..., and . This tests the ability of models to benefit from part names. We consider two image $I$ conditions: coloring all parts with the same color (BLACK) or coloring parts differently (COLOR). The color choice in COLOR corresponds to the position of the part name in $\bar{x}$ , when the text includes part names (PARTS). + +We experiment with the original pre-trained model weights, and with contrastive fine-tuning on our data using a symmetric cross entropy loss (Radford et al., 2021). During fine-tuning only, we consider a data augmentation condition (AUG), where we augment the data by creating examples that include only a subset of the part names in the text and coloring only the parts corresponding to the included parts names in the image, while all other parts remain black. We generate partial part examples for all possible subsets of parts for each example. 
Appendix A.5 illustrates the generated examples. When generating reference games for the augmented data, we constrain all the examples within a reference game to have the same number of parts in their full annotation, otherwise the task could be solved by counting parts. Part names are shuffled when creating the augmented data, and part colors correspond to the sequential position of the part name in the templated text. + +# 5.4 Implementation Details + +We set the size of the reference game context to $k = 10$ throughout our experiments. During contrastive fine-tuning, we create a text-image matching matrix of size $k \times k$ for each generated reference game in our training data by randomly selecting a text description for each tangram distractor from its annotations. We compute matching loss in both directions, from text to images and vice versa. In practice, this is equivalent to creating $2k$ reference games in both directions, and provides more informative updates. For all experiments, we use an ensemble of three models combined by element-wise multiplication of their outputs. Appendix A.7 provides model-specific implementation details. Appendix A.9 provides a reproducibility list. + +# 5.5 Estimating Human Performance + +We conduct an initial estimation of expected human performance on the same evaluation task by recruiting an independent group of 217 human participants. Each participant is randomly assigned to one of the four conditions and shown a random sequence of 20 trials from that condition, preventing leakage across conditions. On each trial, we present an annotation from our development set along with the corresponding context of ten tangrams and ask the participant to click the tangram that was being described. We randomly sample one referential context per annotation, which provides coverage over all 125 tangrams and over 600 unique descriptions in each condition. 
Before the actual test trials, each participant is provided with a fixed set of 10 practice trials with feedback indicating whether they have selected the correct tangram, and if not, we highlight the correct answer. Performance in the practice trials is not considered in our analysis. Appendix A.6 provides further details. + +# 5.6 Results and Analysis + +Table 3 shows development and test reference game accuracies under different experimental setups, including for human studies. Figure 7 shows the accuracy distribution for human participants. + +
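The symmetric fine-tuning loss from Section 5.4 can be sketched in plain Python over a $k \times k$ similarity matrix, with matched text-image pairs on the diagonal: cross entropy is computed from texts to images (rows) and from images to texts (columns) and averaged. This is a logits-only sketch of the objective, not the training code, which backpropagates through the model encoders.

```python
import math

def symmetric_contrastive_loss(sim):
    """Symmetric cross entropy over a k x k similarity matrix.

    sim[i][j] is the score of text i with image j; the matched pairs lie
    on the diagonal. Returns the average of the text-to-image and
    image-to-text cross-entropy terms.
    """
    k = len(sim)

    def ce(rows):
        # mean over rows of -log softmax probability at the diagonal entry
        total = 0.0
        for i, row in enumerate(rows):
            z = sum(math.exp(s) for s in row)
            total += -math.log(math.exp(row[i]) / z)
        return total / k

    cols = [[sim[j][i] for j in range(k)] for i in range(k)]
    return 0.5 * (ce(sim) + ce(cols))
```

A strongly diagonal matrix (well-matched pairs) yields a much lower loss than an uninformative all-zeros matrix, whose loss is $\log k$.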
| Condition | CLIP PT | CLIP FT | ViLT PT | ViLT FT | Human |
|---|---|---|---|---|---|
| **Development Results** | | | | | |
| WHOLE+BLACK | 16.1 | 43.3 | 12.9 | 40.9 | 47.7 |
| PARTS+BLACK | 16.4 | 45.3 | 12.5 | 45.7 | 49.1 |
| WHOLE+COLOR | 15.9 | 40.8 | 11.7 | 41.0 | 49.5 |
| PARTS+COLOR | 15.0 | 45.4 | 10.7 | 75.2 | 63.0 |
| PARTS+COLOR+AUG | - | 47.6 | - | 72.2 | |
| **Held-out Test Results** | | | | | |
| WHOLE+BLACK | 17.9 | 42.5 | 13.1 | 44.5 | |
| PARTS+BLACK | 18.6 | 45.8 | 13.3 | 50.3 | |
| WHOLE+COLOR | 18.1 | 41.4 | 12.8 | 44.8 | |
| PARTS+COLOR | 17.0 | 46.5 | 11.7 | 77.3 | |
| PARTS+COLOR+AUG | - | 50.2 | - | 74.4 | |
Table 3: Reference game accuracies (\%) for the different experimental conditions with pre-trained (PT) or fine-tuned (FT) models, as well as for human subjects.

While both models perform better than a random baseline (10%) out of the box, we generally observe poor performance with the pre-trained weights (PT). CLIP slightly outperforms ViLT throughout, potentially because it is trained with a contrastive objective similar to a reference game. Whereas ViLT's matching loss is aligned with our goal, it is only one of several losses in its objective. We observe no reliable improvement from adding part information, either textual or visual. The low performance on WHOLE+BLACK indicates the models fail to generalize familiar concepts to abstract shapes, and the lack of consistent improvement with part information indicates an inability to reason about the correspondence of text and colored parts.

Fine-tuning (FT) dramatically improves performance for both models. Adding part names to the text description improves both models (PARTS+BLACK). However, segmentation information in the form of part coloring without part names (WHOLE+COLOR) shows no benefit. Although ViLT does not benefit from color information alone, the combination with part names (PARTS+COLOR) shows significant added improvement in performance over having access to part information in only one of the modalities. Overall, we observe small, consistent differences in performance between the two models, except when having access to both part names and colors (PARTS+COLOR), which ViLT effectively uses following fine-tuning. This may be because ViLT's tight integration of the modalities in its single encoder allows it to take advantage of the part correspondence information provided when both part names and colors are given.

![](images/b03a957823fc06bb99b5c82045aa3ba4d91bfdee67a0808fe6a5d837bd786c78.jpg)
Figure 7: The distribution of each human participant's mean accuracy in the four conditions. The white dashed lines are the estimated means of a two-component Gaussian mixture model.

Human performance follows a similar trend to the fine-tuned models: adding part names and segmentation helps performance, and their benefit is most pronounced when both are provided. Human performance is significantly higher than pre-trained (PT) models across all four conditions. Fine-tuning (FT) closes this gap. Indeed, in the PARTS+COLOR condition, ViLT significantly outperforms mean human performance. To better analyze human results, we fit a two-component Gaussian mixture model to the distribution of individual participants' accuracies (Figure 7). We observe two components for all conditions except WHOLE+BLACK, indicating two distinct sub-populations. For example, for PARTS+COLOR, the low-performing sub-population has a mean accuracy of $52.5\%$, while the high-performing one has a mean of $83.8\%$, significantly outperforming the fine-tuned ViLT. It is possible that the lower-performing sub-population is not making full use of the additional information.

Data augmentation (AUG) improves performance for CLIP, but not for ViLT, which even shows a small decrease in performance, although it still significantly outperforms CLIP. We hypothesize that the presence of training examples with partial part information complicates resolving the correspondence between parts and their names, resulting in overall lower ViLT performance. We leave further study of this hypothesis for future work.

The augmentation condition fine-tunes the models to handle examples with partial part information, and allows us to study the impact of gradually adding part information. We apply the augmentation process to the development data to generate the data for this analysis. Figure 8 shows the effect of gradually adding part information on the probability of the correct prediction, separated by the total number of parts in the example. Overall, part information is beneficial, but with diminishing returns as more part information is added. We observe this for both models, but with a much faster rate for CLIP, which overall shows much lower performance. ViLT is able to benefit from increasing part information, with the benefit diminishing only after four parts are provided.

![](images/26ff3527537aa417e9cf019986532417e4e0972080ddd6e3ab68cea999ef0be1.jpg)

![](images/afd3a417c2015652161feaf9d9261c3391bf36eab4870c588ae28a6644ad6e5a.jpg)
Figure 8: Mean probability assigned to the correct image using fine-tuned CLIP (left) or fine-tuned ViLT (right) on the development set, by number of parts included in text and colored in the images. Curves are separated by the total number of parts in the annotation of the target example. Error bands are bootstrapped $95\%$ confidence intervals.

# 6 Discussion

KILOGRAM provides a new window into the visual abstraction capacity of grounded language models and their ability to generalize concepts beyond their photographic appearance, an integral component of human concept representations (Fan et al., 2015). Our experiments show that there is significant room to improve pre-trained models, which should be able to perform zero-shot reference game tasks without fine-tuning as well as humans do (Clark and Wilkes-Gibbs, 1986). The improved performance after fine-tuning indicates the multi-modal architecture itself has the potential for higher performance, which current pre-training regimes likely do not support. In particular, ViLT's improved performance as a function of additional part information suggests that more structured concept alignment may play a role in this effort (e.g., between parts expressed as lexical items and the corresponding elements of the image).
While we focused on the task of reference resolution, KILOGRAM is also well-suited for production tasks (e.g., generating human-like distributions of descriptions or coloring named parts on a blank tangram) as well as instruction-following tasks (e.g., placing pieces in the described configuration to reconstruct a tangram). More broadly, our data emphasizes the need for maintaining well-calibrated distributions over the many different possible ways that people may conceptualize or talk about things, rather than collapsing to a "best" prediction.

# 7 Limitations

Although randomly constructed reference games provide an interpretable evaluation metric, they also pose several limitations. Performance is limited by the fact that descriptions were elicited for isolated images. These descriptions do not reflect the kind of pragmatic reasoning commonly deployed by human speakers in reference games to resolve ambiguities (Goodman and Frank, 2016). In other words, annotators were not able to anticipate the necessary level of detail to disambiguate the object from a specific context of distractors, hence the descriptions may be underinformative. Randomly generated reference games may include ambiguities that make them impossible to solve (e.g., two objects that could both plausibly be described as a bird). The possible performance ceiling on these games is likely below $100\%$. Extending the data through interactive reference games is an important direction for future work. Likewise, our studies of baseline human performance on this task are preliminary. We found that participants clustered into higher- and lower-performing groups, likely reflecting attentional and motivational factors (e.g., some participants may not have fully attended to the provided part information). A better understanding of human behavior is critical before making any clear conclusions comparing humans and model performance.
Ultimately, models outperformed mean human performance significantly only after fine-tuning on approximately 6,600 example reference games.

Our resource contribution and analysis are focused on English. While the data collection design does not make language-specific assumptions, it depends on the availability of proficient speakers, which is limited in contemporary crowdsourcing services for certain languages. Our large collection of visual stimuli is well suited to extend our data collection to other languages and cultures, which may display different abstractions. This is an important direction for future work. Extending our analysis to other languages depends on the availability of pre-trained models in these languages, which may be limited by the availability of aligned language-vision data and the computational resources required for pre-training.

# Acknowledgements

This research was supported by ARO W911NF21-1-0106, NSF under grant No. 1750499, and a gift from Open Philanthropy. NK is supported by the Masason Fellowship, AS by a Facebook PhD Fellowship and an NSF GRF under grant No. 1650441, and RDH by a CV Starr Fellowship. We thank Rob Goldstone, Judith Fan, Cathy Wong, and the anonymous reviewers for their helpful comments and suggestions. We are grateful for the contributions of the workers on Mechanical Turk.

# References

Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In IEEE International Conference on Computer Vision, pages 2425-2433.
Mark Atkinson, Gregory J Mills, and Kenny Smith. 2019. Social group effects on the emergence of communicative conventions and language complexity. Journal of Language Evolution, 4(1):1-18.
Adrian Bangerter, Eric Mayor, and Dominique Knutsen. 2020. Lexical entrainment without conceptual pacts? Revisiting the matching task. Journal of Memory and Language, 114:104129.
Irving Biederman. 1987.
Recognition-by-components: a theory of human image understanding. Psychological Review, 94(2):115-147.
Steven Bird. 2004. NLTK: The Natural Language Toolkit. ArXiv, cs.CL/0205028.
G. Bradski. 2000. The OpenCV Library. Dr. Dobb's Journal of Software Tools.
Bailey Brashears and John Paul Minda. 2020. The effects of feature verbalizability on category learning. In Proceedings of the 42nd Conference of the Cognitive Science Society.
Lucía Castillo, Kenny Smith, and Holly P Branigan. 2019. Interaction promotes the adaptation of referential conventions to the communicative context. Cognitive Science, 43(8):e12780.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C. Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. CoRR, abs/1504.00325.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. UNITER: Universal image-text representation learning. In European Conference on Computer Vision, pages 104-120.
Herbert H Clark and Deanna Wilkes-Gibbs. 1986. Referring as a collaborative process. Cognition, 22(1):1-39.
Yael M. Cycowicz, D Friedman, M Rothstein, and Joan Gay Snodgrass. 1997. Picture naming by young children: norms for name agreement, familiarity, and visual complexity. Journal of Experimental Child Psychology, 65(2):171-237.
Melissa C Duff, Julie Hengst, Daniel Tranel, and Neal J Cohen. 2006. Development of shared information in communication despite hippocampal amnesia. Nature Neuroscience, 9(1):140-146.
Jon Andoni Duñabeitia, Davide Crepaldi, Antje S. Meyer, Boris New, Christos Pliatsikas, Eva Smolka, and Marc Brysbaert. 2018. MultiPic: A standardized set of 750 drawings with norms for six European languages. Quarterly Journal of Experimental Psychology, 71:808-816.
Joost Elffers. 1977. Tangram: The Ancient Chinese Puzzle. Penguin Books.
Judith E. Fan, Daniel Yamins, and Nicholas B. Turk-Browne. 2015.
Common object representations for visual recognition and production. Cognitive Science. +Alicia Fasquel, Angèle Brunellière, and Dominique Knutsen. 2022. A modified procedure for naming 332 pictures and collecting norms: Using tangram pictures in psycholinguistic studies. Behavior research methods. +Nicholas FitzGerald, Yoav Artzi, and Luke Zettlemoyer. 2013. Learning distributions over logical forms for referring expression generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1914-1925. + +Jean E Fox Tree. 1999. Listening in on monologues and dialogues. Discourse processes, 27(1):35-53. +Dedre Gentner and Arthur B. Markman. 1997. Structure mapping in analogy and similarity. American Psychologist, 52:45-56. +Noah D. Goodman and Michael C. Frank. 2016. Pragmatic language interpretation as probabilistic inference. Trends in Cognitive Sciences, 20:818-829. +Chris Harris, Mike Stephens, et al. 1988. A combined corner and edge detector. In *Alvey vision conference*, volume 15, pages 10-5244. CiteSeer. +Robert D. Hawkins, Michael C. Frank, and Noah D. Goodman. 2020. Characterizing the dynamics of learning in repeated reference games. Cognitive science, 44(6):e12845. +Judith Holler and Katie Wilkin. 2011. Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior, 35(2):133-153. +William S Horton and Richard J Gerrig. 2002. Speakers' experiences and audience design: Knowing when and knowing how to adjust utterances to addressees. Journal of Memory and Language, 47(4):589-606. +William S Horton and Daniel G Slater. 2012. Anticipating who will say what: The influence of speaker-specific memory associations on reference resolution. Memory & cognition, 40(1):113-126. +Michel Hupet, Xavier Seron, and Yves Chantraine. 1991. The effects of the codability and discriminability of the referents on the collaborative referring procedure. 
British Journal of Psychology, 82(4):449-462. +Alyssa Ibarra and Michael K Tanenhaus. 2016. The flexibility of conceptual pacts: Referring expressions dynamically shift to accommodate new conceptualizations. Frontiers in psychology, 7:561. +Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904-4916. PMLR. +Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In International Conference on Machine Learning, pages 5583-5594. PMLR. +Robert M Krauss and Sidney Weinheimer. 1964. Changes in reference phrases as a function of frequency of usage in social interaction: A preliminary study. Psychonomic Science, 1(1):113-114. + +Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2016. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123:32-73. +Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. +Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in neural information processing systems, 32. +Gary Lupyan and Bodo Winter. 2018. Language is more abstract than you think, or, why aren't languages more iconic? Philosophical Transactions of the Royal Society B: Biological Sciences, 373. +Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. 2016. 
Generation and comprehension of unambiguous object descriptions. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 11-20. IEEE. +William McCarthy, Robert X. D. Hawkins, Haoliang Wang, Cameron Holdaway, and Judith E. Fan. 2021. Learning to communicate about shared procedural abstractions. ArXiv, abs/2107.00077. +Douglas L. Medin, Robert L. Goldstone, and Dedre Gentner. 1993. *Respects for similarity*. Psychological Review, 100:254-278. +Margaret Mitchell, Kees van Deemter, and Ehud Reiter. 2010. Natural reference to objects in a visual domain. In Proceedings of the International Natural Language Generation Conference. +Tara Murfitt and Jan McAllister. 2001. The effect of production variables in monolog and dialog on comprehension by novel listeners. Language and Speech, 44(3):325-350. +Sonia K Murthy, Thomas L Griffiths, and Robert D Hawkins. 2022. Shades of confusion: Lexical uncertainty modulates ad hoc coordination in an interactive communication task. Cognition, 225:105152. +Vicente Ordonez, Girish Kulkarni, and Tamara Berg. 2011. Im2text: Describing images using 1 million captioned photographs. Advances in neural information processing systems, 24. +Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR. + +Michael F Schober and Herbert H Clark. 1989. Understanding by addressees and overhearers. Cognitive psychology, 21(2):211-232. +Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556-2565. +Roger N. Shepard. 1987. 
Toward a universal law of generalization for psychological science. Science, 237(4820):1317-1323.
Todd Shore, Theofronia Androulakaki, and Gabriel Skantze. 2018. KTH tangrams: A dataset for research on alignment and conceptual pacts in task-oriented dialogue. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
J Slocum. 2003. The Tangram Book: The Story of the Chinese Puzzle with over 2000 Puzzles to Solve. Sterling Publishing, New York.
Joan Gay Snodgrass and Mary Vanderwart. 1980. A standardized set of 260 pictures: norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6(2):174-215.
Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. 2017. A corpus of natural language for visual reasoning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 217-223.
Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in photographs. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 6418-6428.
Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Candace Ross. 2022. Winoground: Probing vision and language models for visio-linguistic compositionality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5238-5248.
Barbara Tversky and Kathleen Hemenway. 1984. Objects, parts, and categories. Journal of Experimental Psychology: General, 113:169-193.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems.
The use of spatial relations in referring expression generation. In Proceedings of the International Conference on Natural Language Generation. +Catherine Wong, William McCarthy, Gabriel Grand, Yoni Friedman, Joshua B. Tenenbaum, Jacob Andreas, Robert D. Hawkins, and Judith E. Fan. 2022. Identifying concept libraries from language about object structure. ArXiv, abs/2205.05666. +Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. 2016. Modeling context in referring expressions. In The European Conference on Computer Vision, pages 69-85. +Martin Zettersten and Gary Lupyan. 2020. Finding categories through words: More nameable features improve category learning. Cognition, 196:104135. + +# A Appendix + +# A.1 Examples from KILOGRAM + +Figure A.1 shows example tangrams from our data. Figure A.2 shows examples of the use of the part name head, the most common part head word in the data. All data can be browsed on the data visualization dashboard: https://lil.nlp.cornell.edu/kilogram/. + +# A.2 Collecting Tangrams + +We scan all the pages of tangram solutions from Slocum (2003) into JPEG files to extract SVG files of individual tangrams. We use heuristics based on edge and corner detection (Harris et al., 1988) to extract individual tangrams into separate files by detecting the four corners of each puzzle and adding padding. We heuristically detect the individual standard pieces in each tangram using corner detection. Because the shapes are standard, we can test if an extracted shape is an expected puzzle's piece and if we obtain the expected number of such shapes. We resize each tangram and all its pieces to a standard size, and label the ID of each puzzle piece consistently across all tangrams. We heuristically and manually validate the outputs, and prune solutions that fail to vectorize properly, for example if the process fails to recover exactly seven pieces. 
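The final validation step in A.2, checking that vectorization recovered exactly the seven standard pieces, can be sketched as follows. This is a minimal sketch assuming piece areas normalized by the total puzzle area; the tolerance and function names are illustrative, not the actual pipeline's code.

```python
import math

# Relative areas of the seven standard tangram pieces for a unit-square puzzle:
# two large triangles (1/4 each), one medium triangle (1/8),
# two small triangles (1/16 each), one square (1/8), one parallelogram (1/8).
STANDARD_AREAS = sorted([1/4, 1/4, 1/8, 1/16, 1/16, 1/8, 1/8])

def is_valid_tangram(piece_areas, tol=0.01):
    """Check that extracted piece areas match the seven standard pieces.

    `piece_areas` are areas normalized by the total puzzle area; `tol` is an
    illustrative tolerance for vectorization noise.
    """
    if len(piece_areas) != 7:
        return False  # e.g., the process failed to recover all seven pieces
    if not math.isclose(sum(piece_areas), 1.0, abs_tol=tol):
        return False
    # Compare the sorted multiset of areas against the standard proportions.
    return all(abs(a, ) <= tol for a in []) if False else all(
        abs(a - b) <= tol
        for a, b in zip(sorted(piece_areas), STANDARD_AREAS))
```

A solution that yields the wrong number of shapes, or shapes whose areas do not match the standard proportions, is pruned.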
+ +# A.3 Crowdsourcing Qualifications and Survey + +The qualifier includes three multiple choice questions aimed to ensure that (a) the annotator describes the abstract shape meaningfully instead of simply describing its geometry; (b) each part description only contains one part (body and arms instead of body with arms); and (c) the part descriptions correspond to the description of the whole shape. We provide a short video tutorial of the task and examples of invalid annotations for workers to view before completing the qualifier. We also collect basic non-identifying demographic data from each worker, including the languages that they speak and their proficiency, if English is their first language, and where they learned English. We retain the correspondence of anonymized hashed worker IDs to the annotations and language information they provide. + +# A.4 Dense Annotation Sampling + +The set DENSE is made of 62 tangrams sampled from FULL and 12 tangrams commonly used in + +prior work. We sample the 62 tangrams from FULL to represent the diversity of tangrams using the first set of annotations we collect. We plot the annotated tangrams by average log perplexity of whole-shape descriptions with $\frac{1}{100}$ smoothing and PSA and apply a $5 \times 5$ grid to the plot (Figure A.3). Using perplexity and PSA allows us to sample a set of tangrams with diverse degrees of annotation and segmentation agreement. With a relatively high smoothing factor, we are able to spread out the data points, because the majority of the data set has high divergence in descriptions. We randomly pick 12 periphery points to collect more annotations for outliers, uniformly sample 25 from all the 1004 tangrams, and randomly sample 25, one from each grid, to represent the entire distribution. + +We calculate average log perplexity of whole shape annotations for each tangram. 
Let $\bar{x}^{(1)},\ldots,\bar{x}^{(N)}$ be the annotations for a tangram, where each annotation is a sequence of tokens $\bar{x}^{(j)} = \langle x_1,\dots,x_{M^{(j)}}\rangle$ of length $M^{(j)}$. We create a language model $p^{(j)}$ for every annotation $\bar{x}^{(j)}$ using all other $N - 1$ annotations for the tangram:

$$
p^{(j)}(x) = \frac{C_{x \in \bar{x}^{(j' \neq j)}} + k}{\mathrm{total}_{j' \neq j} + kV}, \tag{2}
$$

where $C_{x \in \bar{x}^{(j' \neq j)}}$ is the number of occurrences of $x$ in the other annotations for the tangram, $k$ is the smoothing factor, $\mathrm{total}_{j' \neq j}$ is the total number of words used in the other annotations for the tangram, and $V$ is the vocabulary size of all whole-shape annotations across all tangrams. The log perplexity for annotation $\bar{x}^{(j)}$ is $\log PP^{(j)} = -\frac{1}{M^{(j)}} \sum_{i=1}^{M^{(j)}} \log_2 p^{(j)}(x_i^{(j)})$. The log perplexity for the tangram is the average over all its annotations, $\log PP = \frac{1}{N} \sum_{j=1}^{N} \log PP^{(j)}$. We lowercase, stem, and remove stop words before computing the log perplexity.

# A.5 Example Inputs for Experimental Conditions

Figure A.4 shows how one annotation, including both text and image, appears under the different experimental conditions. For conditions with PARTS annotations, we generate simple English sentences combining the whole-shape description with the part descriptions using the template `<whole> with <part>, <part>, ..., and <part>`. We add an indefinite article to each singular part description. BLACK images are tangrams with all pieces colored black with white borders. COLOR images are tangrams with each part colored with one of the CSS preset colors, in the order coral, gold, lightskyblue, lightpink, mediumseagreen, darkgrey, lightgrey, corresponding to the parts in the annotation.
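The templated text generation and sequential color assignment described above can be sketched as follows. This is a minimal sketch: the function names and the exact article heuristic are illustrative assumptions, not the actual generation code.

```python
# CSS preset colors, assigned in order to the parts in the annotation.
PART_COLORS = ["coral", "gold", "lightskyblue", "lightpink",
               "mediumseagreen", "darkgrey", "lightgrey"]

def with_article(part):
    """Prefix an indefinite article (simplified a/an heuristic)."""
    return ("an " if part[0].lower() in "aeiou" else "a ") + part

def templated_text(whole, parts):
    """Combine a whole-shape description with part descriptions."""
    parts = [with_article(p) for p in parts]
    if len(parts) == 1:
        listing = parts[0]
    else:
        listing = ", ".join(parts[:-1]) + ", and " + parts[-1]
    return f"{whole} with {listing}"

def part_coloring(parts):
    """Pair each part with its color by sequential position."""
    return list(zip(parts, PART_COLORS))
```

For example, a whole-shape description "a bird" with parts "head" and "wing" yields a single templated sentence, and the part coloring pairs "head" with coral and "wing" with gold.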
For the augmented condition (AUG), text inputs are whole annotations combined with each possible subset of the part descriptions. Image inputs are tangrams colored in the same way as colored images, but the parts excluded from the subset of part descriptions are colored black instead. All part descriptions in the annotations are randomly shuffled and not consistently associated with any particular color in the images, so that the coloring solely serves as an indication of the ordering of parts in the combined text.

# A.6 Human Performance Baseline Details

We recruited an independent group of 233 human participants from the Prolific crowdsourcing platform (https://www.prolific.co/), and asked them to perform the same reference game task we used for model evaluation. Each participant was randomly assigned to one of the four conditions and shown a random sequence of 20 trials from that condition. On each trial, we showed a text annotation from the development set along with the corresponding context of ten tangrams and asked the participant to click the tangram that was being described. The information that was available varied across conditions, just as in the model evaluations. The tangrams were either presented to participants in black-and-white (BLACK) or colored according to their segmentation map (COLOR), and the language was either the whole-shape description alone (WHOLE) or with the parts included (PARTS). In the PARTS+COLOR condition, the parts text was colored to match the image to facilitate visual comparison, providing the same alignment information available to the models.

We took several steps to ensure high-quality responses. First, participants began with a fixed set of 10 practice trials to familiarize themselves with the task. For these practice trials, we provided feedback indicating whether they had selected the correct tangram, and if not, we highlighted the correct answer.
To assess whether participants were paying attention as opposed to responding randomly, we inserted an unambiguous "catch trial" where the target was the square tangram and the description was "square". We excluded 16 participants who failed to select the correct target on this trial, yielding a final sample size of 217 participants out of the 233 recruited.

Because our aim was to obtain overall accuracy estimates for each condition, we did not require judgements for every individual annotation and context in the test set. However, we were able to ensure good coverage of the dataset, including annotations from all 125 tangrams and over 600 unique descriptions in each condition.

# A.7 Model-specific Implementation Details

For experiments with CLIP, we use the ViT-B/32 variant. We fine-tune using an Adam optimizer with learning rate 5e-8 and weight decay 1e-6. At the end of each epoch, the training data is shuffled and rebatched. We train the models up to 200 epochs and use a patience of 50 epochs to select the model with the highest image prediction accuracy on a non-augmented validation set taken from the training data. All images are resized to CLIP's default input resolution of $224 \times 224$, with white padding to make rectangular images square. The total number of trainable parameters in CLIP is 151.2M. CLIP models are fine-tuned with either a single GeForce RTX 2080 Ti GPU with 11GB memory or a single Titan RTX GPU with 24GB memory. Fine-tuning takes approximately 40 minutes per epoch for augmented setups (AUG) and roughly 3 minutes for other setups.

For ViLT experiments, we fine-tune with an AdamW optimizer with learning rate 1e-4 and weight decay 1e-2. We use a cosine learning rate schedule with warm-up over the first epoch. We train the models up to 30 epochs with a patience of 10 epochs and follow the same model selection criterion as for CLIP. All images are resized to $384 \times 384$.
The total number of trainable parameters in ViLT is 87.4M. ViLT models are fine-tuned with a single Titan RTX GPU with 24GB memory. Fine-tuning takes up to 5.5 hours per epoch for augmented setups (AUG) and roughly 15 minutes for other setups.

# A.8 Random Generation of Reference Games

In our main experiments (Section 5), we randomly generate reference games subject to constraints (Section 5.1). In particular, we ensure that distractors contain the same total number of parts. We explore the impact of these constraints by repeating our experiments on reference games generated without them. Without the constraints, part counting can help the model disqualify distractors and significantly narrow down the set of likely referents. This is because images with a different number of parts colored compared to the number of parts in the text description can be easily ignored without considering the semantics of the text or images.

| Condition | CLIP (PT) | CLIP (FT) | ViLT (PT) | ViLT (FT) |
| --- | --- | --- | --- | --- |
| WHOLE+BLACK | 17.3 | 46.2 | 13.2 | 41.3 |
| PARTS+BLACK | 16.8 | 47.4 | 12.6 | 47.0 |
| WHOLE+COLOR | 15.9 | 48 | 12.4 | 46.2 |
| PARTS+COLOR | 15.9 | 71.3 | 12.1 | 89.0 |
| PARTS+COLOR+AUG | - | 74 | - | 86.0 |

Table A.1: Reference game development accuracies $(\%)$ for the different experimental conditions with pre-trained (PT) or fine-tuned (FT) models for games generated without constraints.

Table A.1 shows development accuracies for games generated without constraints, both for training and testing. Generally, the success rate achieved on unconstrained contexts is much higher compared to contexts generated with constraints (Figure 3). However, when analyzing the performance of this model on part-controlled contexts (Figure A.5), we observe roughly similar performance to the games generated with constraints (Figure 8), even though we would expect a significant performance increase given the results in Table A.1. We even observe a more pronounced decrease in performance when more parts are added, illustrating further difficulty generalizing. We conclude that the model trained on games generated without constraints (Table A.1) likely learns to rely on part-counting heuristics and may be less reliable in other settings.

# A.9 Reproducibility Checklist

For all reported experimental results:

- A clear description of the mathematical setting, algorithm, and/or model: yes; see Section 5.
- Submission of a zip file containing source code, with specification of all dependencies, including external libraries, or a link to such resources: yes; attached to our submission.
- Description of computing infrastructure used: yes; see Appendix A.7.
- The average runtime for each model or algorithm (e.g., training, inference, etc.) or estimated energy cost: yes; see Appendix A.7.
- Number of parameters in each model: yes; see Appendix A.7.
- Corresponding validation performance for each reported test result: yes; see Table 3 and Table A.1 for results on the development set.
+- Explanation of evaluation metrics, with links to code used: yes; see Section 5 for an explanation of the reference game metric. An implementation is included in the attached code zipfile. + +For all experiments with hyperparameter search: + +- We performed a minimal manual search for learning rate and weight decay, and used the same values for all experiments (described in Section A.7). + +For all datasets used: + +- Relevant details such as languages, and number of examples and label distributions: yes; see Section 3. +- Details of train/test/validation splits: yes; see Section 3.3. +- Explanation of any data that were excluded, and all pre-processing steps: yes; see Section 3 and Section A.2. +- A zip file containing data or link to a downloadable version of the data: yes; attached to our submission. +- For new data collected, a complete description of the data collection process, such as instructions to annotators and methods for quality control: yes; see Section 3.2 and Section A.3. + +![](images/34d262b9ced2eaf32c2b364ca78b19a386500459a9bba6d77068af232ab92229.jpg) +Figure A.1: Example tangrams from our dataset. + +![](images/aee07e2c337bbb398f1b5fecb36a10a681a9f0b2a52f01202020e4f402f8905d.jpg) +Figure A.2: Example tangrams containing the part description head. Each example includes a tangram and its whole-shape description. We highlight the segmentation corresponding to head in each tangram. + +![](images/f59535b038bbc23d61220193f786885a793e4eb028a9e0cbb3e45bfb35f0f333.jpg) +Figure A.3: Sampled tangrams for dense annotation collection: 12 purple points picked from the periphery, 25 red points randomly sampled from each grid, and 25 green points uniformly sampled from all points. + +![](images/9213ccedd5f85cee4b0a0ac4e8f6e25b5cd1d9768624c4aabb8f96bf3f43b681.jpg) +Figure A.4: An example of one annotation across the different experimental conditions. The augmentation condition (AUG) creates multiple examples from the same annotation. 
+ +![](images/f314c85a59bce39e6e309c719e9c970cc1ba992ff4cef1e118c8ce3ed90bc526.jpg) + +![](images/5367753929390f86df728e03b5a0d220796e47d4cc3ef75813ddbf2667c0aaa.jpg) +Figure A.5: Mean development probabilities of predicting the correct image in reference games generated without constraints using fine-tuned CLIP (top) or fine-tuned ViLT (bottom) by number of parts included in text and colored in the images. We separate the curves by the total number of parts in the annotation of the target example. The error bands show the $95\%$ confidence interval of the expected mean at each point by bootstrapping with 1000 resamplings. \ No newline at end of file diff --git a/acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/full.md b/acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/full.md new file mode 100644 index 0000000000000000000000000000000000000000..652df4fbfaf137986bd89618e5d8b05e1b5273b9 --- /dev/null +++ b/acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/full.md @@ -0,0 +1,334 @@ +# ACENet: Attention Guided Commonsense Reasoning on Hybrid Knowledge Graph + +Chuzhan
Hao, Minghui Xie, and Peng Zhang* + +College of Intelligence and Computing, Tianjin University + +{chuzhanhao, minghuixie, pzhang}@tju.edu.cn + +# Abstract + +Augmenting pre-trained language models (PLMs) with knowledge graphs (KGs) has demonstrated superior performance on commonsense reasoning. Given a commonsense-based QA context (a question and multiple choices), existing approaches usually estimate the plausibility of candidate choices separately, based on their respective retrieved KGs, without considering the interference among different choices. In this paper, we propose an Attention guided Commonsense rEasoning Network (ACENet) to endow the neural network with the capability of integrating hybrid knowledge. Specifically, our model applies multi-layer interaction of answer choices to continually strengthen correct choice information and guide the message passing of the GNN. In addition, we design a mix attention mechanism of nodes and edges to iteratively select supporting evidence on the hybrid knowledge graph. Experimental results demonstrate the effectiveness of our proposed model through considerable performance gains across the CommonsenseQA and OpenBookQA datasets. + +# 1 Introduction + +Commonsense question answering (CSQA) aims to answer questions based on an understanding of context and some background knowledge, which marks a critical gap between human intelligence and machine intelligence (Talmor et al., 2019). This capability of holding prior knowledge and reasoning over it is a foundation for communication and interaction with the world. Therefore, commonsense reasoning has become an important research task, with various datasets and models proposed in this field (Mihaylov et al., 2018; Talmor et al., 2019; Bhagavatula et al., 2020; Feng et al., 2020; Yasunaga et al., 2021; Zhang et al., 2022). + +Question: What room is likely to have a sideboard on the counter? + +A. home B. serve food buffet C. dining room (X) D. living room E.
kitchen (√) + +![](images/29ee97d3fdf38646a333ae62deda30787ec214dd00fa621e24045035f72aa70d.jpg) +Other models: subgraph w/o interaction +Figure 1: Through the interaction between subgraphs, the correct choice information is continuously reinforced. The subgraph is retrieved from ConceptNet (Speer et al., 2017). The nodes labeled with letters are the q-c pairs and connect to the other nodes of their respective subgraphs. Yellow nodes correspond to entities mentioned in the question; green nodes correspond to those in the answer. The other nodes are associated entities introduced when extracting the subgraph. + +![](images/88c17d334fffea843e405ff269cd9744eb0affd0eaca2853e510a9a4b2c3b27f.jpg) +Our model: subgraph w/ interaction + +Recently, PLMs (Devlin et al., 2019) have made significant progress in many question answering tasks because of their powerful representation capability. Nevertheless, since commonsense knowledge is rarely stated explicitly in natural language (Gunning, 2018), it is hard for PLMs to learn commonsense knowledge from the pre-training corpus. Therefore, many CSQA models augment PLMs with various external knowledge sources (e.g., structured knowledge such as ConceptNet (Speer et al., 2017) and unstructured knowledge such as Wikipedia). Compared with unstructured knowledge, structured knowledge sources have the advantage of being easier to train on and of yielding explicit, recoverable evidence, which leads many researchers to leverage KGs for reasoning. + +A straightforward approach to leveraging a KG is to directly model its relational paths (Santoro et al., 2017; Lin et al., 2019; Feng et al., 2020). Although path-based models have strong interpretability, they are easily affected by the sparsity and scale of KGs. In addition, graph neural networks (GNNs) have achieved promising performance on modeling KGs. Hence, GNNs are widely used to implicitly capture commonsense knowledge from KGs (Feng et al., 2020; Yan et al., 2021; Yasunaga et al., 2021; Zhang et al., 2022).
+ +However, these approaches have two main issues. First, they lack consideration of the interference effects between choices. In common KG-augmented models, the probability scores of candidate choices are calculated separately based on their respective reasoning subgraphs or paths, which makes it difficult to capture the nuance between the correct choice and distractors in commonsense questions. Second, the retrieved KGs contain a lot of noisy knowledge, which can mislead reasoning. QAGNN (Yasunaga et al., 2021) and JointLK (Sun et al., 2022) filter out noisy knowledge based on node features, but ignore the differing significance of the various edges, which carry rich semantics. Wang et al. (2021) also demonstrates the importance of edge features for commonsense reasoning. Therefore, we should capture the important features from many aspects (e.g., node, edge, graph, and QA context). + +In response, we propose ACENet to capture the nuance among multiple choices by integrating the QA context and external commonsense knowledge graphs. Given a QA context and the retrieved subgraph of each choice, we encode each q-c pair using a PLM. The q-c pair is then introduced into its respective subgraph as a global node (Ying et al., 2021). Knowledge is transmitted between subgraphs to construct a complete hybrid knowledge graph for reasoning (see § 3.2). First, we apply a knowledge interaction layer to carry out information interaction between subgraphs and guide GNN message passing. This layer is stacked to form a hierarchy in which multi-layer interactions recursively reinforce the important choice information during message passing (see Figure 1). Additionally, to further aggregate key features in the reasoning graph, we design a mix attention mechanism of nodes and edges to iteratively select supporting evidence based on the global node.
Our model simultaneously leverages the hybrid knowledge of the PLM, KGs, and different choices to augment commonsense reasoning ability. In summary, our contributions are as follows: + +- We propose a knowledge interaction layer to fuse the knowledge of the PLM and different choices. The multi-layer interactions continuously strengthen correct choice information in the hybrid knowledge graph. + +- We design a mix attention mechanism of nodes and edges to iteratively select relevant knowledge over multiple layers of the GNN. The global information of the q-c pair is also introduced to enhance evidence selection. + +- Experimental results show that ACENet is superior to current KG-augmented methods. Through multi-layer interactions and multi-head attention guidance over the hybrid knowledge graph, ACENet exhibits stronger performance in complex reasoning, such as solving questions with negation or multiple prepositions. + +# 2 Related Work + +Graph Neural Networks (GNNs). GNNs have been widely used to model knowledge graphs due to their strong ability to process graph-structured data. GNNs often follow a neighborhood aggregation and message passing scheme (Gilmer et al., 2017). Recently, many works on CSQA use GNNs to model external KGs. MHGRN (Feng et al., 2020) transforms single-hop propagation into multi-hop propagation based on RGCN (Schlichtkrull et al., 2018), but it does not take into account the differing importance of various nodes. QAGNN (Yasunaga et al., 2021), GreaseLM (Zhang et al., 2022), and JointLK (Sun et al., 2022) use the Graph Attention Network (GAT) (Velickovic et al., 2018) to represent the knowledge graph. GAT is a commonly used GNN variant that performs attention-based message passing of node features. According to GSC (Wang et al., 2021), edge features play an essential role in commonsense reasoning. Hence, we design a mix attention mechanism of nodes and edges based on GAT. + +Question Answering with LM+KG.
Although pre-trained language models have achieved great success in many NLP domains, they do not yet perform well on reasoning questions. Therefore, many works propose LM+KG methods for CSQA, which use a knowledge graph as an external knowledge source for PLMs. JAKET (Yu et al., 2020) aligns the entities and relations between questions and the knowledge graph and fuses the two kinds of representations. QAGNN (Yasunaga et al., 2021) introduces a context node as the bridge between PLMs and the knowledge graph; the context node is initialized with the encoding of the PLM. GreaseLM (Zhang et al., 2022) designs an interactive scheme to bidirectionally transfer information from both the LM and the KG in multiple layers. JointLK (Sun et al., 2022) calculates the fine-grained attention weight between each question token and each KG node to strengthen joint reasoning ability. These methods all focus on enhancing the fusion of the two knowledge sources, but lack consideration for the interference effects of different choices in the QA context. + +# 3 Methodology + +The diagram of the proposed ACENet is shown in Figure 2. We assume a setting where each example in our dataset contains a question $q$ and a set of answer choices $\{c_1, c_2, \dots, c_n\}$. We derive the gold answer from the QA context and relevant commonsense knowledge. Therefore, we retrieve a KG $\mathcal{G}$ as the source of commonsense knowledge, following prior work (Feng et al., 2020). + +![](images/af425ee24e0260a117d14bd0717fdc31307ccf2054616160ba28fd77cb364e7c.jpg) +Figure 2: Overall architecture of our proposed ACENet. + +# 3.1 Knowledge Interaction Layer + +As shown in Figure 2, given a question and $n$ answer choices, we concatenate them to obtain $n$ q-c pairs $[q; c_i]$ ($i \in [1, n]$). Each q-c pair is fed through the PLM, and we use its "[CLS]" token output as a summary vector for the corresponding choice.
+ +Although a PLM can learn a good general language representation for each choice (Qiu et al., 2020), it encodes each q-c pair separately, without considering the inter-choice interference effects that are essential for the downstream commonsense question answering task. Our model uses the representation of each q-c pair to integrate external commonsense knowledge in the respective subgraphs (see Figure 3). How to initialize the summary representation of each choice is crucial for minimizing the distracting information passed to the downstream supporting evidence selection and answer prediction tasks. + +Therefore, we propose a knowledge interaction layer (KIL, shown in Figure 3) to strengthen the correct choice information. First, we add a multi-head attention (Vaswani et al., 2017) KIL on top of the "[CLS]" tokens. This layer is defined as: + +$$
\boldsymbol {\alpha} _ {i j} = \operatorname {MHA} \left(\boldsymbol {Q} ^ {t}, \boldsymbol {K} ^ {t}, \boldsymbol {V} ^ {t}\right) \tag {1}
$$

$$
\boldsymbol {H} ^ {t} = \boldsymbol {\eta} \odot \tilde {\boldsymbol {H}} ^ {t} + (1 - \boldsymbol {\eta}) \odot (\boldsymbol {\alpha} _ {i j} \boldsymbol {V} ^ {t}) \tag {2}
$$ + +where $Q, K, V$ are interactive representations of all q-c pairs, obtained as linear projections of the stacked embeddings of the q-c pairs, MHA is the multi-head attention mechanism, and $\alpha_{ij}$ is the attention weight between choices. $\eta = \sigma (\tilde{H}^t W + b)$, where $\sigma$ denotes the sigmoid activation function, $\odot$ represents the element-wise product, and $\tilde{H}^t$ denotes the choice representations before passing through the $t$-th KIL layer. Our motivation for adding attention across the q-c pairs generated from different choices is to encourage inter-choice interactions. By allowing choice representations to interact with each other, the model is able to train on a better input signal for message aggregation and passing.
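The gated interaction of Eqs. (1)–(2) can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's released code: the class name, head count, and tensor layout are our own assumptions.

```python
import torch
import torch.nn as nn

class KnowledgeInteractionLayer(nn.Module):
    """Sketch of Eqs. (1)-(2): the q-c pair vectors attend to each other,
    then a learned sigmoid gate mixes the attended summary back in."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(dim, dim)  # produces eta = sigma(H W + b)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, n_choices, dim), one "[CLS]" vector per q-c pair
        attended, _ = self.mha(h, h, h)          # Eq. (1): alpha_ij V^t
        eta = torch.sigmoid(self.gate(h))        # gating coefficients
        return eta * h + (1.0 - eta) * attended  # Eq. (2)

kil = KnowledgeInteractionLayer(dim=200)
x = torch.randn(2, 5, 200)   # 2 questions, 5 answer choices each
print(kil(x).shape)          # torch.Size([2, 5, 200])
```

Stacking several such layers, as the paper describes, simply chains this forward pass.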
+ +# 3.2 Hybrid Knowledge Graph + +To unify the knowledge of the PLM and the KGs into the same reasoning space and take advantage of both, we introduce the q-c pair into the extracted subgraphs $\mathcal{G}_i$. Inspired by Gilmer et al. (2017) and Yasunaga et al. (2021), in the hybrid knowledge graph we add the q-c pair as a special node, called [CNode], to each $\mathcal{G}_i$, and connect the [CNode] to every other node individually. Each node in $\mathcal{G}_i$ is assigned one of four types based on its information source: q-c node, question entity node, answer entity node, and retrieved entity node, referred to as $\mathcal{T} = \{\mathcal{C}, \mathcal{Q}, \mathcal{A}, \mathcal{R}\}$. + +To further leverage the interference effects of different choices, the [CNode] replaces the usual graph pooling functions to represent the global information of each subgraph $\mathcal{G}_i$. In the BERT model (Devlin et al., 2019) there is a similar token, [CLS], a special token attached at the beginning of each sequence to represent the sequence-level feature on downstream tasks. Thus, we use the [CNode] as a medium of interaction between subgraphs to achieve information transmission between the choices. + +![](images/9e2b4654a17742dd12f986e15be354d28897d5bffc5132a8a4509d12c2dcf861.jpg) +Figure 3: The schematic diagram of the Hybrid Knowledge Graph and the Knowledge Interaction Layer. The retrieved nodes are marked in the graph, where the correspondence between knowledge sources and graph nodes is highlighted in the same color. The grey nodes are associated entities in the subgraph. + +We initialize the embedding of [CNode] with the representation of the q-c pair $(\mathcal{C}_i^0 = f_{KIL}(f_{LM}([q;c_i])))$, and the other nodes on $\mathcal{G}_i$ with their pre-trained entity embeddings prepared by Feng et al. (2020).
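As a concrete illustration of this construction, the sketch below appends a [CNode] to a retrieved subgraph and wires it to every entity node in both directions. The function name and the 2×E `edge_index` tensor layout are illustrative assumptions, not the paper's implementation.

```python
import torch

# Node-type ids for T = {C, Q, A, R}
CNODE, QNODE, ANODE, RNODE = 0, 1, 2, 3

def build_hybrid_subgraph(n_nodes, edge_index, node_types, cnode_emb, node_embs):
    """Append a [CNode] (index n_nodes) and connect it to every existing
    node in both directions, mirroring the construction in Section 3.2."""
    cnode = n_nodes
    others = torch.arange(n_nodes)
    full = torch.full((n_nodes,), cnode)
    # bidirectional CNode <-> entity-node edges
    extra = torch.cat([torch.stack([others, full]),
                       torch.stack([full, others])], dim=1)
    edge_index = torch.cat([edge_index, extra], dim=1)
    node_types = torch.cat([node_types, torch.tensor([CNODE])])
    x = torch.cat([node_embs, cnode_emb.unsqueeze(0)], dim=0)
    return x, edge_index, node_types

# toy subgraph: 3 entity nodes, one existing edge 0 -> 1
x, ei, t = build_hybrid_subgraph(
    3, torch.tensor([[0], [1]]),
    torch.tensor([QNODE, ANODE, RNODE]),
    cnode_emb=torch.randn(200), node_embs=torch.randn(3, 200))
print(ei.shape)  # 1 original + 6 CNode edges -> torch.Size([2, 7])
```

In practice `cnode_emb` would be the KIL output $\mathcal{C}_i^0$ and `node_embs` the pre-trained entity embeddings.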
In the message aggregation and passing stage, the representation of the [CNode] is updated like that of a normal node in the subgraph, and the [CNode] aggregates information from all nodes. Inspired by this, we can realize knowledge interaction between the different subgraphs $\mathcal{G}_i$ and define the importance of evidence on $\mathcal{G}_i$ relying on $[\mathrm{CNode}]_i$. Hence, the global node can serve as a hub for node communication and subgraph interaction, making each node more aware of non-local information. Combining the PLM, KGs, and inter-choice interaction information, we construct a novel hybrid knowledge graph (see Figure 3). + +In the following subsections, we conduct GNN message aggregation and passing over the hybrid knowledge graph to score each choice. + +# 3.3 GNN Architecture + +Structured data such as knowledge graphs are much more efficient at representing commonsense than unstructured text (Xu et al., 2021). Therefore, we design a mix attention mechanism of nodes and edges to achieve iterative supporting evidence selection on the reasoning graph $\mathcal{G}_i$. Meanwhile, we also add the KIL between the layers of the GNN to enhance global information interaction among choices (see KIL-GNN in Figure 2). + +Edge Encoding. To leverage edge information in supporting evidence selection and in the representation of the whole graph, we capture the source/target node types and the edge types. Following Yasunaga et al. (2021), we first obtain the type embedding $u_{t}$ of each node $t$, as well as the edge embedding $r_{st}$ from node $s$ to node $t$, by + +$$
\boldsymbol {r} _ {s t} = f _ {r} \left(e _ {s t}, u _ {s}, u _ {t}\right) \tag {3}
$$ + +where $u_{s}, u_{t} \in \mathbb{R}^{|\mathcal{T}|}$ are one-hot embeddings indicating the node types of $s$ and $t$, and $e_{st} \in \mathbb{R}^{|\mathcal{R}|}$ is a one-hot embedding indicating the relation type of the edge $s \to t$. Here we add self-loops for all nodes.
$f_{r}: \mathbb{R}^{|\mathcal{R}| + 2|\mathcal{T}|} \to \mathbb{R}^{D}$ is a 2-layer MLP. We then compute the importance of each edge, depending on the [CNode], in the reasoning process. + +Edge-Weighted Message Updating. Wang et al. (2021) points out that edge encoding is of vital importance for commonsense reasoning. To better encode effective edge features into message aggregation, each edge's weight is used to rescale the information flow on that edge. Intuitively, an edge's weight signifies the edge's relevance for reasoning about the given task instance. Thus, we also use the global node [CNode] as global context to compute edge attention weights. + +Formally, the update rule of edges at layer $\ell$ is: + +$$
\boldsymbol {w} _ {(i, j)} ^ {\ell} = f _ {w} ^ {\ell} \left(\left[ \mathcal {C} ^ {\ell}, \boldsymbol {r} _ {i j} ^ {\ell} \right]\right) \tag {4}
$$

$$
\boldsymbol {A} _ {(i, j)} ^ {\ell} = \frac {e ^ {w _ {(i , j)} ^ {\ell}}}{\sum_ {(s , t) \in \mathcal {E}} e ^ {w _ {(s , t)} ^ {\ell}}} \tag {5}
$$

$$
\tilde {\boldsymbol {r}} _ {s t} ^ {\ell} = \sum_ {s \in \mathcal {N} _ {t} \cup \{t \}} \boldsymbol {A} _ {(s, t)} ^ {\ell} \boldsymbol {r} _ {s t} ^ {\ell} \tag {6}
$$ + +where $f_{w}^{\ell}$ is a 2-layer MLP and $\mathcal{N}_t$ is the set of node $t$'s incoming neighbors. We then compute the complete node message from $s$ to $t$ as + +$$
\tilde {h} _ {s} ^ {\ell} = f _ {m} \left(h _ {s} ^ {\ell}, \tilde {r} _ {s t} ^ {\ell}\right) \tag {7}
$$ + +where $f_{m}$ denotes a linear fully connected layer and $h_s^0$ is the initial embedding of node $s$. + +The embedding of each node $s$ is updated to $\tilde{h}_s^\ell$, which depends on the neighboring edges of node $s$. For each neighboring edge, the edge weight $A_{(i,j)}^\ell$ rescales that edge's influence on the message update of node $s$. Through this soft pruning method, we integrate the essential edge information into the node features.
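Eqs. (3)–(5) — type-aware edge encoding followed by [CNode]-conditioned edge weighting — can be sketched as below. The relation count, MLP shapes, and activation are illustrative assumptions; only the overall structure follows the equations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_REL, N_TYPE, D = 38, 4, 200  # |R|, |T|, D are illustrative sizes

f_r = nn.Sequential(  # Eq. (3): (e_st, u_s, u_t) -> edge embedding r_st
    nn.Linear(N_REL + 2 * N_TYPE, D), nn.GELU(), nn.Linear(D, D))
f_w = nn.Sequential(  # Eq. (4): [CNode context; r_ij] -> scalar score
    nn.Linear(2 * D, D), nn.GELU(), nn.Linear(D, 1))

def encode_and_weight_edges(rel_ids, src_types, tgt_types, cnode_ctx):
    """Encode each edge from one-hot relation/type features, then
    softmax-normalise its relevance given the [CNode] context (Eq. 5)."""
    e = F.one_hot(rel_ids, N_REL).float()
    u_s = F.one_hot(src_types, N_TYPE).float()
    u_t = F.one_hot(tgt_types, N_TYPE).float()
    r = f_r(torch.cat([e, u_s, u_t], dim=-1))         # (n_edges, D)
    ctx = cnode_ctx.expand(r.size(0), -1)
    w = f_w(torch.cat([ctx, r], dim=-1)).squeeze(-1)  # (n_edges,)
    A = torch.softmax(w, dim=0)                       # Eq. (5)
    return r, A  # Eq. (6) then sums A[:, None] * r per target node

r, A = encode_and_weight_edges(
    torch.tensor([0, 3, 5]), torch.tensor([1, 2, 3]),
    torch.tensor([0, 0, 2]), torch.randn(D))
print(float(A.sum()))  # the edge weights sum to 1
```

The soft-pruning effect comes from `A`: low-scoring edges contribute almost nothing to the aggregated messages.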
In the following message aggregation and passing, the node features on the hybrid subgraph are strongly contextualized. + +Message Aggregation and Passing. For message passing, we use the multi-head attention GAT (Velickovic et al., 2018), which induces node representations through iterative message passing between neighbors on the graph. Specifically, in the $\ell$-th layer of ACENet, we update the representation of each node $t$ to: + +$$
\boldsymbol {h} _ {t} ^ {\ell + 1} = \big \Vert _ {k = 1} ^ {K} f _ {n} \left(\sum_ {s \in \mathcal {N} _ {t} \cup \{t \}} \alpha_ {s t} ^ {k} \tilde {\boldsymbol {h}} _ {s} ^ {\ell}\right) \tag {8}
$$ + +where $\Vert$ represents concatenation, $\alpha_{st}^{k}$ are the normalized attention coefficients computed by the $k$-th attention mechanism $(\alpha^{k})$, $\mathcal{N}_t$ represents the neighborhood of node $t$, and $f_{n}$ is a 2-layer MLP. Note that, in this setting, the final output $h_t$ consists of the important edge-wise and node-wise features for each node. + +We then use multi-head attention to compute the attention weight $\alpha_{st}$ from node $s$ to node $t$. The query and key vectors are obtained by + +$$
\boldsymbol {q} _ {s} = f _ {q} \left(\tilde {\boldsymbol {h}} _ {s} ^ {\ell}\right), \boldsymbol {k} _ {t} = f _ {k} \left(\tilde {\boldsymbol {h}} _ {t} ^ {\ell}\right) \tag {9}
$$ + +where $f_{q}$ and $f_{k}$ are linear transformations. Experimental results show that the degree feature of nodes is also crucial; thus we add the degree feature $d_{s}$ to the local node attention weight, which is computed as follows: + +$$
\alpha_ {s t} = \frac {\exp \left(\gamma_ {s t}\right)}{\sum_ {t ^ {\prime} \in \mathcal {N} _ {s} \cup \{s \}} \exp \left(\gamma_ {s t ^ {\prime}}\right)} \cdot d _ {s}, \gamma_ {s t} = \frac {\boldsymbol {q} _ {s} \boldsymbol {k} _ {t}}{\sqrt {D}} \tag {10}
$$ + +Subgraph Information Interaction.
In the above process, we execute the message aggregation and passing of a single GAT layer, in which the [CNode] aggregates information from the other nodes of its subgraph. To further strengthen correct choice information and the perception of the overall QA context, we add a knowledge interaction layer between the GAT layers to fuse the global representations of the $\mathcal{G}_i$ (shown in Figure 2). + +# 3.4 Answer and Explain + +We now discuss the learning and interaction process of ACENet instantiated for commonsense QA tasks. By integrating the knowledge of the PLM, the retrieved KGs, and the interaction information of the choices, we compute the probability of $c_{i}$ being the correct answer as: + +$$
p \left(c _ {i} \mid q, c\right) \propto \exp \left(\mathrm {MLP} \left(\mathcal {C} ^ {L M}, \mathcal {G} ^ {K I L}, \mathcal {G}\right)\right) \tag {11}
$$ + +where $\mathcal{C}^{LM}$ is the initial embedding of the q-c pair from the PLM, $\mathcal{G}^{KIL}$ is the knowledge interaction representation of the q-c pair over the different subgraphs, and $\mathcal{G}$ denotes attention-based pooling over the last layer of the GNN representation. + +The whole model is trained end-to-end jointly with the PLM (e.g., RoBERTa (Liu et al., 2019)) using the cross-entropy loss. Finally, we choose the choice with the highest probability score as our answer. + +# 4 Experiments + +In this section, we conduct experiments over two commonsense QA benchmarks, answering the following research questions. + +- RQ1: Does ACENet outperform state-of-the-art baselines? +- RQ2: How do the model components and the amount of training data affect ACENet? +- RQ3: What is the performance of ACENet on different types of complex questions? +- RQ4: What is the intuitive performance of ACENet in the process of reasoning? + +# 4.1 Experimental Settings + +# 4.1.1 Datasets + +We conduct experiments to evaluate our approach on two commonsense QA benchmarks: CommonsenseQA and OpenBookQA.
+ +CommonsenseQA (Talmor et al., 2019) is a 5-way multiple-choice question answering dataset of 12,102 questions that require background commonsense knowledge beyond surface language understanding. The test set of CommonsenseQA is not publicly available, and model predictions can only be evaluated every two weeks via the official leaderboard. We perform our experiments using the in-house (IH) data split of Lin et al. (2019) to compare to baseline methods. + +OpenBookQA (Mihaylov et al., 2018) is a 4-way multiple-choice question answering dataset that tests elementary scientific knowledge. It contains 5,957 questions along with an open book of scientific facts. We use the official data split. Additionally, OpenBookQA also provides a collection of background facts in textual form. We use the correspondence between these facts and their questions, prepared by Clark et al. (2020), as an additional input to the context module. + +# 4.1.2 Implementation Details + +Following previous work (Yasunaga et al., 2021), we use ConceptNet (Speer et al., 2017), a general-domain knowledge graph, as our structured knowledge source. Node embeddings are initialized using the entity embeddings prepared by Feng et al. (2020), which applies pre-trained LMs to all triples in ConceptNet and then obtains a pooled representation for each entity. Given each q-c pair (question and answer choice), we retrieve the top 200 nodes and their adjacent edges according to the node relevance score, following Yasunaga et al. (2021). We set the dimension to $\mathrm{D} = 200$ and the number of GNN layers to $\mathrm{L} = 5$, with a dropout rate of 0.2 applied to each layer (Srivastava et al., 2014). The batch size on CommonsenseQA and OpenBookQA is chosen from $\{64, 128\}$. We train the model with the RAdam optimizer (Liu et al., 2020) using two GPUs (Tesla V100), which takes about 20 hours on average.
We use separate learning rates for the LM module and the GNN module, which are chosen from $\{1e-5, 2e-5, 3e-5\}$ and $\{5e-4, 1e-3, 2e-3\}$, respectively. The above hyperparameters are tuned on the development set. + +# 4.1.3 Compared Methods + +Although text corpora can provide complementary knowledge beyond knowledge graphs, our model focuses on exploiting the ability of the KG and the joint reasoning among different choices, the LM, and the KG, so we choose LM+KG methods for comparison. + +To further investigate the enhancement effects of KGs on CSQA tasks, we compare with a vanilla fine-tuned LM, which does not use the KG. We use RoBERTa-large for CommonsenseQA, and RoBERTa-large and AristoRoBERTa for OpenBookQA. In addition, the LM+KG methods share a similar high-level framework with our method: they usually use an LM as a text encoder and a GNN or RN as the tool for KG message aggregation and passing. But the specific knowledge used and the joint reasoning methods differ: (1) RN (Santoro et al., 2017), (2) RGCN (Schlichtkrull et al., 2018), (3) GconAttn (Wang et al., 2019), (4) KagNet (Lin et al., 2019), (5) MHGRN (Feng et al., 2020), (6) HGN (Yan et al., 2021), (7) JointLK (Sun et al., 2022), (8) QAGNN (Yasunaga et al., 2021), (9) GREASELM (Zhang et al., 2022). (1), (2), and (3) are relation-aware GNNs for KGs, and (4) and (5) further model paths in KGs. (6) generates the missing edges of subgraphs for reasoning. (7), (8), and (9) construct a joint reasoning graph, which can enhance the interaction of multi-modal knowledge. To be fair, we use the same LM for all comparison methods and our model. The key difference between ACENet and these methods is that they do not simultaneously consider the interference effects among choices or the importance of different edge and node features. + +# 4.2 Main Results (RQ1) + +The results on the CommonsenseQA in-house split are shown in Table 1, and the results on the OpenBookQA test set in Table 2.
We repeat each experiment 4 times and report the mean and standard deviation of accuracy. + +
| Methods | IHdev-Acc. (%) | IHtest-Acc. (%) |
| --- | --- | --- |
| RoBERTa-large (w/o KG) | 73.07 (±0.45) | 68.69 (±0.56) |
| +RGCN | 72.69 (±0.19) | 68.41 (±0.66) |
| +GconAttn | 72.61 (±0.39) | 68.59 (±0.96) |
| +RN | 74.57 (±0.91) | 69.08 (±0.21) |
| +KagNet | 73.47 (±0.22) | 69.01 (±0.76) |
| +MHGRN | 74.45 (±0.10) | 71.11 (±0.81) |
| +HGN | – | 73.64 (±0.30) |
| +QA-GNN | 76.54 (±0.21) | 73.41 (±0.92) |
| +JointLK | 77.88 (±0.25) | 74.43 (±0.83) |
| +GREASELM | 78.50 (±0.50) | 74.20 (±0.40) |
| +ACENet (Ours) | 78.54 (±0.45) | 74.72 (±0.70) |
+ +Table 1: Performance comparison on the CommonsenseQA in-house split. We follow the data division method of Lin et al. (2019) and report the in-house Dev (IHdev) and Test (IHtest) accuracy. + +As shown on both datasets, our proposed model ACENet outperforms previous methods. We observe consistent improvements over fine-tuned LMs and existing LM+KG models. The boost over QA-GNN suggests that ACENet makes better use of inter-choice interaction information than existing LM+KG methods.
| Methods | RoBERTa-Large | AristoRoBERTa |
| --- | --- | --- |
| Fine-tuned LMs (w/o KG) | 64.80 (±2.37) | 78.40 (±1.64) |
| +RGCN | 62.45 (±1.57) | 74.60 (±2.53) |
| +GconAttn | 64.75 (±1.48) | 71.80 (±1.21) |
| +RN | 65.20 (±1.18) | 75.35 (±1.39) |
| +MHGRN | 66.85 (±1.19) | 80.60 |
| +JointLK | 70.34 (±0.75) | 84.92 (±1.07) |
| +QA-GNN | 67.80 (±2.75) | 82.77 (±1.56) |
| +GREASELM | - | 84.80 |
| +ACENet (Ours) | 70.47 (±0.12) | 83.40 (±0.14) |
# 4.3 Ablation Studies (RQ2)

We further conduct targeted experiments to investigate the effectiveness of the different components of our model.

**Impact of Model Components.** We add each model component individually and report the accuracy on the CommonsenseQA IHdev set in Table 3. Adding the edge&node attention mechanism leads to a $0.79\%$ improvement in performance, which shows that some nodes and edges are not conducive to reasoning. Additionally, adding the KIL (GNN) module yields a significant improvement, $76.33\% \rightarrow 77.56\%$ $(+1.23\%)$, suggesting that the interaction among the different choices is essential in the process of message passing. Meanwhile, our KIL (PLM) provides a better initial representation for the q-c pairs, which is also critical.

Table 2: Test accuracy comparison on OpenBookQA. Methods with AristoRoBERTa use the textual evidence collected by Clark et al. (2020) as an additional input to the QA context.
| Model | Dev Acc. |
| --- | --- |
| None | 76.33 |
| (a) w/ KIL(PLM) | 76.67 |
| (b) w/ KIL(GNN) | 77.56 |
| (c) w/ Edge&Node Attention | 77.12 |
| (d) w/ all (final) | 78.54 |
**Impact of Less Labeled Training Data.** Table 4 shows the results of our model and the baselines when trained with less training data on CommonsenseQA. Even with less training data, our model still achieves the best test accuracy, which suggests that incorporating the knowledge of external KGs and the interaction among multiple choices is helpful for commonsense reasoning in the low-resource setting.

Table 3: Ablation study of our model components (adding one component at a time), using the CommonsenseQA IHdev set.
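The reduced-data condition in Table 4 amounts to subsampling the training set before fine-tuning. A minimal, framework-agnostic sketch (the fixed seed and the placeholder examples are assumptions for illustration, not the paper's exact procedure):

```python
import random

def subsample(train_examples, fraction, seed=0):
    """Return a reproducible random subset covering `fraction` of the training set."""
    rng = random.Random(seed)          # local RNG so the subset is reproducible
    k = int(len(train_examples) * fraction)
    return rng.sample(train_examples, k)  # sampling without replacement

full_train = [f"question-{i}" for i in range(1000)]  # placeholder examples
reduced = subsample(full_train, 0.6)                 # the 60% Train condition
print(len(reduced))
```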
| Methods | RoBERTa-Large (60% Train) | RoBERTa-Large (100% Train) |
| --- | --- | --- |
| LM Finetuning | 65.56 (±0.76) | 68.69 (±0.56) |
| RN | 66.16 (±0.28) | 70.08 (±0.21) |
| MHGRN | 68.84 (±1.06) | 71.11 (±0.81) |
| HGN | 71.10 (±0.11) | 73.64 (±0.30) |
| QA-GNN | 70.27 (±0.35) | 73.41 (±0.92) |
| GREASELM | 71.08 (±0.52) | 74.20 (±0.40) |
| ACENet (Ours) | 71.31 (±0.42) | 74.72 (±0.70) |
Table 4: Performance (accuracy) with different amounts of training data on the CommonsenseQA IHtest set (same split as Lin et al. (2019)).

**Impact of Number of Layers $(L)$ and Heads $(H)$.** To gain further insight into the factors behind the capacity of our model, we investigate the impact of the number of layers and heads in the reasoning process. Figure 4 shows the performance of our model with different numbers of layers and heads. We observe that increasing the number of layers and heads within a certain range improves performance. The intuitive explanation is that multiple heads help the model focus on multiple knowledge rules, while multiple layers help the model recursively select the relevant knowledge rules (Paul and Frank, 2020).

![](images/f1b640e2e0028b33bdf4587c1ee815d3c7fd783aeccad48cbba4fdbcab2747c6.jpg)
Figure 4: Performance of the ACENet model with different numbers of heads and GNN layers on the CommonsenseQA IHdev set.

However, performance begins to drop gradually when $\mathrm{H} = 1, 2$ and $\mathrm{L} > 5$, or $\mathrm{H} = 4$ and $\mathrm{L} > 4$. A widely accepted explanation for the performance degradation as GNN layers are stacked is the over-smoothing effect (Chien et al., 2020). Therefore, we set $\mathrm{L} = 5$, $\mathrm{H} = 2$ to best balance their utility. Compared with the baselines, our model achieves better results across different numbers of layers
| Model | Negation: w/o | Negation: w/ | Prepositions: 0 | Prepositions: 1 | Prepositions: ≥2 | Entities: ≤10 | Entities: >10 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Number | 1107 | 114 | 551 | 464 | 206 | 1012 | 209 |
| QA-GNN | 77.78 | 71.93 | 77.86 | 76.51 | 77.18 | 76.98 | 78.47 |
| GREASELM | 79.31 | 74.56 | 79.31 | 76.94 | 80.58 | 77.57 | 83.73 |
| ACENet (Ours) | 79.49 | 75.44 | 79.49 | 77.59 | 81.56 | 78.66 | 81.34 |
Table 5: Performance on different types of complex questions. The questions are taken from the CommonsenseQA IHdev set.

![](images/f42b3cf4144e1f8c8c2e8d933412dfe8f2c2f2c1e82ed68463f8fd9502c26d2b.jpg)
Figure 5: Ablation study on the number of stacked GNN layers.

(shown in Figure 5).

# 4.4 Quantitative Analysis (RQ3)

Given these overall performance improvements, we further analyze whether they are reflected in questions that require more complex reasoning. We characterize the reasoning complexity of questions by features such as negation and, for complex questions, the number of prepositions and entities. We compare our model with the stronger prior baselines in Table 5.

First, our model exhibits a substantial boost $(+3.51\%, +0.88\%)$ over QA-GNN and GREASELM on questions with a negation term (e.g., no, not, never, etc.), suggesting its strength in negative reasoning. In addition, the number of prepositions (e.g., in, on, of, with, etc.) in a question usually reflects the number of explicit reasoning constraints. The results in Table 5 show that our model generally outperforms the baselines on questions with any number of prepositions. Additionally, the number of question entities approximately indicates the scale of the retrieved reasoning graph. Our model achieves better results $(+1.68\%, +1.09\%)$ over QA-GNN and GREASELM on most questions ( $\leq 10$ entities). At the same time, our model and the prior best model, GREASELM, perform comparably on larger-scale retrieved graphs.

# 4.5 Qualitative Analysis (RQ4)

Figure 6 shows the choice-to-choice attention weights induced by the KIL layers of our model at different stages. Our model can strengthen the correct-choice information through multi-layer interactions using external KGs to arrive at the right answer, while QA-GNN and GREASELM make incorrect predictions. We also analyze whether different heads focus on multiple knowledge rules.
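Such per-head inspection can be sketched as follows, under the assumption that the choice-to-choice attention weights are available as a nested list indexed by `[layer][head][choice]` (the shapes and values below are illustrative, not the model's actual weights):

```python
# Hypothetical choice-to-choice attention weights: [layer][head][choice].
CHOICES = ["control people", "pay bills", "hurt people", "buy food", "get things"]

attention = [
    [[0.10, 0.55, 0.05, 0.15, 0.15], [0.10, 0.15, 0.05, 0.35, 0.35]],  # KIL 1
    [[0.08, 0.60, 0.04, 0.14, 0.14], [0.12, 0.18, 0.06, 0.32, 0.32]],  # KIL 2
]

def dominant_choice_per_head(attn, choices):
    """For each layer and head, report the choice receiving the most attention."""
    return [
        [choices[max(range(len(head)), key=head.__getitem__)] for head in layer]
        for layer in attn
    ]

for layer_idx, heads in enumerate(dominant_choice_per_head(attention, CHOICES), 1):
    print(f"KIL {layer_idx}: {heads}")
```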
In Figure 6, we observe that the two heads focus on different choice-related knowledge during message aggregation and passing. The attention of the two heads captures the key reasoning information in the first several KILs, but gradually averages out by the final layer. Head 1 primarily focuses on "pay bills" across the different KILs, which provides strong reasoning evidence for the correct answer. In addition, the attention weights on "buy food" and "get things" become higher in head 2, which also helps our model select the relevant knowledge. As a whole, our model integrates the different knowledge rules mined by each head to arrive at the correct prediction.

# 4.6 Analysis of Experimental Results

To explain why ACENet outperforms the other baselines, our hypothesis is that the receptive field of the subgraph nodes expands through the interaction of the multi-layer Knowledge Interaction Layers, and that through the aggregation and propagation of the multi-layer graph neural network, each node becomes more aware of non-local information. However, explaining the results of neural networks requires strenuous effort. We can also extend this method to more general settings in other tasks (e.g., document modeling, reading comprehension, information extraction, etc.).

Question: August needed money because he was afraid that he'd be kicked out of his house. What did he need money to do?

A. control people B. $\checkmark$ pay bills (Ours) C. hurt people D. $\times$ buy food (GREASELM) E. $\times$ get things (QA-GNN)

![](images/6fc5f059b5c0596837d920e3e1203b2f82a3533187e4f4dd27c2df549de3f7b8.jpg)
Figure 6: Qualitative analysis of ACENet's inter-choice attention weight changes across multiple knowledge interaction layers and heads.

# 5 Conclusions

In this paper, we propose a multi-head attention knowledge interaction layer to enhance correct-choice information and capture the nuances among different choices.
Meanwhile, a mixed attention mechanism over nodes and edges is introduced into message passing to iteratively select relevant knowledge in the hybrid knowledge graph. Experimental results on CommonsenseQA and OpenBookQA demonstrate the superiority of ACENet over other LM+KG methods and its strong performance in handling complex questions. In future work, we plan to further investigate the augmenting effects of knowledge graphs for reasoning, and to integrate neural and symbolic reasoning systems to achieve dual-system cognitive intelligence.

# Limitations

Although our model achieves competitive performance on commonsense question answering tasks, there are limitations that can be improved upon. The limitations of our study are summarized as follows:

1) GNNs incorporate implicit external knowledge in the process of message aggregation and passing. Therefore, existing KG-augmented methods are usually not interpretable enough.
2) The optimal number of GNN layers in our model depends on experimental results. However, the scale of the knowledge graph is often uncertain in real application scenarios, so we cannot guarantee that a specific number of GNN layers will achieve the appropriate performance. How to design depth-adaptive GNNs that balance efficiency and effectiveness is a key challenge.
3) At present, our approach of using the interaction between choices to strengthen correct-choice information is only suitable for question answering tasks with a limited scope.

# Ethics Statement

This paper proposes a general approach to fuse the QA context, language models, and external knowledge graphs for commonsense reasoning. We work within the purview of acceptable privacy practices and strictly follow the data usage policy. In all experiments, we use public datasets and comply with their intended use. We have also described our experimental settings in detail to ensure the reproducibility of our method.
We neither introduce any social/ethical bias into the model nor amplify any bias in the data, so we do not foresee any direct social consequences or ethical issues.

# Acknowledgments

This work is supported in part by the Natural Science Foundation of China (grants No. 62276188 and No. 61876129), the Beijing Academy of Artificial Intelligence (BAAI), TJU-Wenge joint laboratory funding, and MindSpore 2.

# References

Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. 2020. Adaptive universal generalized pagerank graph neural network. ArXiv preprint, abs/2006.07988.
Peter Clark, Oren Etzioni, Tushar Khot, Daniel Khashabi, Bhavana Mishra, Kyle Richardson, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord, Niket Tandon, et al. 2020. From 'F' to 'A' on the NY Regents science exams: An overview of the Aristo project. AI Magazine, 41(4):39-53.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, and Xiang Ren. 2020. Scalable multihop relational reasoning for knowledge-aware question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1295-1309, Online. Association for Computational Linguistics.
Justin Gilmer, Samuel S. Schoenholz, Patrick F.
Riley, Oriol Vinyals, and George E. Dahl. 2017. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1263-1272. PMLR. +David Gunning. 2018. Machine common sense concept paper. ArXiv preprint, abs/1810.07528. +Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. KagNet: Knowledge-aware graph networks for commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2829-2839, Hong Kong, China. Association for Computational Linguistics. +Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2020. On the variance of the adaptive learning rate and beyond. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. + +Roberta: A robustly optimized bert pretraining approach. ArXiv preprint, abs/1907.11692. +Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381-2391, Brussels, Belgium. Association for Computational Linguistics. +Debjit Paul and Anette Frank. 2020. Social commonsense reasoning with multi-head knowledge attention. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2969-2980, Online. Association for Computational Linguistics. +Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. 
Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10):1872-1897. +Adam Santoro, David Raposo, David G. T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter W. Battaglia, and Tim Lillicrap. 2017. A simple neural network module for relational reasoning. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4967-4976. +Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In European semantic web conference, pages 593-607. Springer. +Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 4444-4451. AAAI Press. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958. +Yueqing Sun, Qi Shi, Le Qi, and Yu Zhang. 2022. JointLK: Joint reasoning with language models and knowledge graphs for commonsense question answering. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5049-5060, Seattle, United States. Association for Computational Linguistics. +Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for + +Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. 
Association for Computational Linguistics. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008. +Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. +Kuan Wang, Yuyu Zhang, Diyi Yang, Le Song, and Tao Qin. 2021. Gnn is a counter? revisiting gnn for question answering. ArXiv preprint, abs/2110.03192. +Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, et al. 2019. Improving natural language inference using external knowledge in the science questions domain. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7208-7215. +Yichong Xu, Chenguang Zhu, Ruochen Xu, Yang Liu, Michael Zeng, and Xuedong Huang. 2021. Fusing context into knowledge graph for commonsense question answering. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 1201-1207, Online. Association for Computational Linguistics. +Jun Yan, Mrigank Raman, Aaron Chan, Tianyu Zhang, Ryan Rossi, Handong Zhao, Sungchul Kim, Nedim Lipka, and Xiang Ren. 2021. Learning contextualized knowledge structures for commonsense reasoning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4038-4051, Online. Association for Computational Linguistics. +Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. 
QA-GNN: Reasoning with language models and knowledge graphs for question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 535-546, Online. Association for Computational Linguistics. +Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and TieYan Liu. 2021. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34. + +Donghan Yu, Chenguang Zhu, Yiming Yang, and Michael Zeng. 2020. Jaket: Joint pre-training of knowledge graph and language understanding. ArXiv preprint, abs/2010.00796. +Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D Manning, and Jure Leskovec. 2022. Greaselm: Graph reasoning enhanced language models for question answering. ArXiv preprint, abs/2201.08860. \ No newline at end of file diff --git a/acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/images.zip b/acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..85d23f78ab87279f2181a2b188c8708fb15477f1 --- /dev/null +++ b/acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffdd5f0fc3513fdb6eed2019907db2a62e51849f6be75185f8b7cdf9c0a757cd +size 461277 diff --git a/acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/layout.json b/acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4237492667c17176e0c9d6e3e1345edd0f423f2d --- /dev/null +++ b/acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:487184a0c26e97832ef0bc5d62c1a28194325d2d0db39ce2137a5604ee4edf67 +size 
370418 diff --git a/acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_content_list.json b/acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6c46cdf3d3ac4ae930710ec2f935080b7abe4665 --- /dev/null +++ b/acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21d34d9fa21af35e106de7ec7a9e55048a4b5b37c897d1f45a80945757897670 +size 79845 diff --git a/acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_model.json b/acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..979b3278aaa0980c000d80aca2360d8b9cc061ac --- /dev/null +++ b/acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a398e2bc17c7a6c4b527a8ed9e90d1063a22f7db8d4dfba2cb61cdb62796b45c +size 92173 diff --git a/acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_origin.pdf b/acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2ce0a58a3d2efe7c6b397f6fa9c84bfe5206836b --- /dev/null +++ b/acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a5fd2191ce6cd89dd7e635e99e0743ffad7080481b6490d106f29a82c353bcf +size 1072784 diff --git 
a/acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/full.md b/acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/full.md new file mode 100644 index 0000000000000000000000000000000000000000..86cdd660defdf3ff2708c08ed742416d18d03ae4 --- /dev/null +++ b/acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/full.md @@ -0,0 +1,297 @@ +# A Comprehensive Comparison of Neural Networks as Cognitive Models of Inflection + +Adam Wiemerslage and Shiran Dudy and Katharina Kann + +University of Colorado Boulder + +first.last@colorado.edu + +# Abstract + +Neural networks have long been at the center of a debate around the cognitive mechanism by which humans process inflectional morphology. This debate has gravitated into NLP by way of the question: Are neural networks a feasible account for human behavior in morphological inflection? We address that question by measuring the correlation between human judgments and neural network probabilities for unknown word inflections. We test a larger range of architectures than previously studied on two important tasks for the cognitive processing debate: English past tense, and German number inflection. We find evidence that the Transformer may be a better account of human behavior than LSTMs on these datasets, and that LSTM features known to increase inflection accuracy do not always result in more human-like behavior. + +# 1 Introduction: The Past Tense Debate + +Morphological inflection has historically been a proving ground for studying models of language acquisition. Rumelhart and McClelland (1985) famously presented a neural network that they claimed could learn English past tense inflection. However, Pinker and Prince (1988) proposed a dual-route theory for inflection, wherein regular verbs are inflected based on rules and irregular verbs are looked up in the lexicon. 
They highlighted several shortcomings of Rumelhart and McClelland (1985) that they claimed any neural network would suffer from.

This opened a line of work wherein cognitive theories of inflection are analyzed by implementing them as computational models and comparing their behavior to that of humans. A famous study in the area of morphology is the wug test (Berko, 1958), where human participants are prompted with a novel-to-them nonce word and asked to produce its plural form. Similarly, morphological inflection models are generally evaluated on words they have not seen during training. However, since they are evaluated on actual words, it is impossible to meaningfully ask a native speaker, who knows the words' inflected forms, how likely different reasonable inflections for the words in a model's test data are. Thus, in order to compare the behavior of humans and models on words unknown to both, prior work has created sets of made-up nonce words (Marcus et al., 1995; Albright and Hayes, 2003).

![](images/232c9072fd0a248f33b2727e05ab5fdd0476b3358cff9c712b1005b52b89022e.jpg)
Figure 1: Summary of the past tense debate as it pertains to this work, color coded by evidence for (blue) or against (red) neural networks as a cognitively plausible account for human behavior.

**English Past Tense** English verbs inflect to express the past and present tense distinction. Most verbs inflect for past tense by applying the /-d/, /-id/, or /-t/ suffix: allophones of the regular inflection class. Some verbs, however, express the past tense with a highly infrequent or completely unique inflection, forming the irregular inflection class. This distinction between regular and irregular inflection has motivated theories like the dual-route theory described above.
Prasada and Pinker (1993) performed a wug test for English past tense inflection in order to compare the model from Rumelhart and McClelland (1985) to humans, with special attention to how models behave with respect to regular vs. irregular forms, finding that it could not account for human generalizations. Albright and Hayes (2003, A&H) gathered production probabilities – i.e., the normalized frequencies of the inflected forms produced by participants – and ratings – i.e., the average rating assigned to a given past tense form on a well-formedness scale. They then implemented two computational models, a rule-based and an analogy-based model, and computed the correlation between the probabilities of past tense forms for nonce verbs under each model and according to humans. They found that the rule-based model more accurately accounts for nonce word inflection.

After several years of progress for neural networks, including state-of-the-art results on morphological inflection (Kann and Schütze, 2016; Cotterell et al., 2016), this debate was revisited by Kirov and Cotterell (2018, K&C), who examined modern neural networks. They trained a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) with attention (Bahdanau et al., 2015) on English past tense inflection and, in experiments quantifying model accuracy on a held-out set of real English verbs, showed that it addresses many of the shortcomings pointed out by Pinker and Prince (1988). They concluded that the LSTM is, in fact, capable of modeling English past tense inflection. They also applied the model to the wug experiment from A&H and found a positive correlation with human production probabilities that was slightly higher than that of the rule-based model from A&H.

Corkery et al. (2019, C&al.) reproduced this experiment and additionally compared to the average human rating that each past tense form received in A&H's dataset.
They found that the neural network from K&C produced probabilities that were sensitive to random initialization – showing high variance in the resulting correlations with humans – and typically did not correlate better than the rule-based model from A&H. They then designed an experiment where inflected forms were sampled from several different randomly initialized models, so that the frequencies of each form could be aggregated in a similar fashion to the adult production probabilities – but the results still favored A&H. They hypothesized that the model's overconfidence in the most likely inflection (i.e., the regular inflection class) leads to uncharacteristically low variance on predictions for unknown words.

**German Noun Plural** McCurdy et al. (2020a, M&al.) applied an LSTM to the task of German noun plural inflection to investigate a hypothesis from Marcus et al. (1995, M95), who attributed the outputs of neural models to their susceptibility to the most frequent pattern observed during training, stressing that, as a result, neural approaches fail to learn patterns of infrequent groups.

German nouns inflect for the plural and singular distinction. There are five suffixes, none of which is considered a regular majority: /-(e)n/, /-e/, /-er/, /-s/, and /-Ø/. M95 built a dataset of monosyllabic German noun wugs and investigated human behavior when inflecting the plural form, distinguishing between phonologically familiar environments (rhymes) and unfamiliar ones (non-rhymes). The German plural system, they argued, is an important test for neural networks since it presents multiple productive inflection rules, all of which are minority inflection classes by frequency. This is in contrast to the dichotomy of the regular and irregular English past tense. M&al. collected their own human production probabilities and ratings for these wugs, and then compared those to LSTM productions.
Humans were prompted with each wug together with the neuter determiner, both to control for the fact that neural inflection models of German noun plurals are sensitive to grammatical gender (Goebel and Indefrey, 2000) and because humans do not have a majority preference for monosyllabic, neuter nouns (Clahsen et al., 1992).

The /-s/ inflection class, which is highly infrequent, appears in a wide range of phonological contexts, which has led some researchers to suggest that it is the default class for German noun plurals, and thus the regular inflection, despite its infrequent use. M&al. found that it was preferred by humans for non-rhymes more than for rhymes, but the LSTM showed the opposite preference, undermining the hypothesis that LSTMs model human generalization behavior. /-s/ was additionally predicted less accurately than other inflection classes on a held-out test set of real noun inflections.

They found that the most frequent inflection class in the training data for the monosyllabic neuter context, /-e/, was over-generalized by the LSTM when compared to human productions. The most frequent class overall, /-(e)n/ (but infrequent in the neuter context), was applied by humans quite frequently to nonce nouns, but rarely by the LSTM. They additionally found that /-er/, which is as infrequent as /-s/, could be accurately predicted on the test set, and that the null inflection /-Ø/, which is generally frequent but extremely rare in the monosyllabic, neuter setting, was never predicted for the wugs. We refer to McCurdy et al. (2020a) for more details on the inflection classes and their frequencies, and for additional discussion of their relevance to inflection behavior.

Ultimately, M&al. reported no correlation with human production probabilities for any inflection class. They concluded that modern neural networks still simply generalize the most frequent patterns to unfamiliar inputs.
Dankers et al. (2021) performed in-depth behavioral and structural analyses of German noun plural inflection by a unidirectional LSTM without attention. They argued that these modeling decisions made a more plausible model of human cognition. In a behavioral test they found that, like humans but unlike M&al., their model did predict /-s/ more for non-rhymes than for rhymes, but the result was not statistically significant. They also found that /-s/ was applied with a high frequency and attributed this to sensitivity to word length. For a visual overview of all studies discussed in this section, see Figure 1.

**Our Contribution** Most work on modern neural networks discussed here analyzes the same bidirectional LSTM with attention and draws a mixture of conclusions based on differing experimental setups. Dankers et al. (2021) changed the LSTM-based architecture and found somewhat different results for German number inflection, though they did not investigate correlations with human ratings or production probabilities in the same way as previous work. The limited variation of architectures in previous studies, as well as inconsistent methods of comparison with human behavior, prevent us from drawing definite conclusions about the adequacy of neural networks as models of human inflection.

Here, we present results on a wider range of LSTMs and a Transformer (Vaswani et al., 2017) model for both English past tense and German number inflection. We ask which architecture is the best account for human inflection behavior and, following M&al., investigate the actual model productions (and probabilities) for the German plural classes in order to qualitatively compare to human behavior. We additionally ask how architectural decisions for the LSTM encoder-decoder affect this correlation. Finally, we investigate the relationship between inflection accuracy on the test set and correlation with human wug ratings.
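The comparisons above rest on rank correlation between model probabilities and human judgments. A minimal, dependency-free sketch of Spearman's $\rho$ (no tied values assumed; the sample values are illustrative, not from the experiments):

```python
def spearman_rho(xs, ys):
    """Spearman's rank correlation for tie-free samples, via the closed form
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), with d the rank differences."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank + 1  # 1-based ranks
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Model probabilities vs. human ratings for five hypothetical wug forms.
model_probs = [0.61, 0.12, 0.45, 0.33, 0.08]
human_ratings = [5.2, 2.1, 4.8, 3.9, 2.5]
print(spearman_rho(model_probs, human_ratings))
```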
We find that the Transformer consistently correlates best with human ratings, producing probabilities that result in Spearman's $\rho$ in the range of 0.47-0.71 for several inflection classes, frequently higher than the LSTMs. However, when looking closely at the Transformer productions, it displays behavior that deviates from humans similarly to the LSTM in M&al., though to a lesser extent. While attention greatly increases LSTM accuracy on inflection, we also find that it does not always lead to better correlations with human wug ratings, and that the directionality of the encoder has more complicated implications. Finally, we find that there is no clear relationship between model accuracy and correlation with human ratings across all experiments, demonstrating that neural networks can solve the inflection task in its current setup without learning human-like distributions. While the Transformer experiments in this work demonstrate stronger correlations with human behavior, and some more human-like behaviors than before, our findings continue to cast doubt on the cognitive plausibility of neural networks for inflection.

# 2 Neural Morphological Inflection

# 2.1 Task Description

The experiments in this paper are centered around a natural language processing (NLP) task called morphological inflection, which consists of generating an inflected form for a given lemma and a set of morphological features indicating the target form. It is typically cast as a character-level sequence-to-sequence task, where the characters of the lemma and the morphological features constitute the input, while the characters of the target inflected form are the output (Kann and Schütze, 2016):

$$
\mathrm{PST}\;\mathrm{cry} \rightarrow \mathrm{cried}
$$

Formally, let $\mathcal{S}$ be the set of paradigm slots expressed in a language and $l$ a lemma in the language.
The set of all inflected forms – or paradigm – $\pi$ of $l$ is then defined as:

$$
\pi(l) = \left\{ \left( f_k[l], t_k \right) \right\}_{k \in \mathcal{S}} \tag{1}
$$

$f_{k}[l]$ denotes the inflection of $l$ which expresses tag $t_{k}$, and $l$ and $f_{k}[l]$ represent strings consisting of letters from the language's alphabet $\Sigma$.

The task of morphological inflection can then formally be described as predicting the form $f_{i}[l]$ from the paradigm of $l$ corresponding to tag $t_{i}$.

# 2.2 Models

Rumelhart and McClelland The original model of Rumelhart and McClelland (1985) preceded many of the features introduced by modern neural networks. For example, they use a feed-forward neural network to encode input sequences. This creates the requirement of coercing variable-length inputs into the fixed-size network. To solve this, they encode input words as fixed-length vectors representing the phonological distinctive feature sets for each trigram in that word. The neural network is then trained to map the features of an input form to a feature vector of a hypothesized output form. The loss is computed between the input feature sets and the feature set for an inflected output form encoded in the same way. At test time, they manually select candidate output forms for each input lemma in order to overcome the intractable decoding problem. The output form, then, is the candidate whose feature vector most closely resembles the model output. Beyond decoding problems, the order of input characters is not encoded, and unique words are represented with potentially identical phonological features.

LSTM The LSTM architecture (Hochreiter and Schmidhuber, 1997) overcomes several of the issues in Rumelhart and McClelland (1985) by way of a recurrent encoding and decoding mechanism, and reliance on character embeddings.
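The character-level sequence-to-sequence framing in §2.1 can be made concrete with a toy preprocessing sketch (the helper names and tag format below are illustrative, not taken from any of the cited implementations):

```python
# Sketch: turning (lemma, tag, form) triples into character-level
# source/target sequences for a seq2seq inflection model.
# The morphological tag is prepended to the input as one extra symbol.

def make_pair(lemma, tag, form):
    src = [tag] + list(lemma)   # e.g. ['PST', 'c', 'r', 'y']
    tgt = list(form)            # e.g. ['c', 'r', 'i', 'e', 'd']
    return src, tgt

def build_vocab(pairs):
    # One shared symbol inventory over tags and characters.
    symbols = {s for src, tgt in pairs for s in src + tgt}
    return {s: i for i, s in enumerate(sorted(symbols))}

pairs = [make_pair("cry", "PST", "cried"),
         make_pair("walk", "PST", "walked")]
vocab = build_vocab(pairs)
print(pairs[0][0])  # ['PST', 'c', 'r', 'y']
```

An encoder-decoder then consumes the integer-indexed source sequence and is trained to emit the target characters one at a time.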
We experiment with several variations of the LSTM encoder-decoder (Sutskever et al., 2014; Cho et al., 2014) to test their behavior compared to humans. First, we vary the directionality of the encoder under the assumption that bidirectional encoding leads to higher accuracy, but a unidirectional encoder may better resemble human processing. We additionally vary whether or not attention is used. Attention is typically a crucial feature for attaining high inflection accuracy. We expect that the same may also be true for assigning a cognitively plausible probability to a nonce inflection, by supplying the model with a mechanism to focus on only the relevant parts of the inflection.

This yields 4 LSTM-based variations. We refer to these models as BiLSTMAttn (BA; from K&C, C&al., and M&al.), UniLSTMAttn (UA), BiLSTMNoAttn (BN), and UniLSTMNoAttn (UN; from Dankers et al. (2021)).

Transformer Finally, we present results for a Transformer sequence-to-sequence model (Vaswani et al., 2017), following the implementation proposed for morphological inflection by Wu et al. (2021). Unlike LSTM-based models, the Transformer employs a self-attention mechanism such that each character representation can be computed in parallel as a function of all other characters. The position of each character is encoded with a special positional embedding. This means that the relation between each character in a word can be represented directly, rather than through a chain of functions via the LSTM recurrence. It is considered to be state-of-the-art for morphological inflection in terms of accuracy, which makes it an important comparison for this study. Some work has called into question the cognitive plausibility of Transformer self-attention in psycholinguistic experiments with word-level language models (Merkx and Frank, 2020) – claiming that the direct access it provides to past input is cognitively implausible.
It is not clear, though, that these arguments apply to character-level models for inflection, wherein words do not necessarily need to be processed one character at a time.

Hyperparameters We implement all LSTMs with pytorch (Paszke et al., 2019) and borrow hyperparameters from previous work on morphological inflection. For the LSTMs, we use the hyperparameters from K&C, which were based on the tuning done by Kann and Schütze (2016). For the Transformer, we follow the hyperparameters from the best model in Wu et al. (2021), but set label smoothing to 0. In preliminary experiments, we found no significant impact of label smoothing on accuracy or on correlation with human behavior across inflection classes.

For all architectures, we follow C&al. and train 10 randomly initialized models. At test time, we decode with beam search with a width of 12. We train for up to 50 epochs because the architectures with fewer parameters tend to converge more slowly.

MGL A&H implement the Minimal Generalization Learner (MGL), which learns explicit rules (e.g. insertion of /-id/ if a verb ends in a /t/ or /d/) at varying levels of granularity. Each rule is associated with a confidence score for a given phonological environment based on its statistics in the train set. At test time, the rule with the highest confidence is applied to produce an inflection, and the confidences can be used to score various regular or irregular inflected forms. We compare to this model on the English data, following previous work.
| Model | Dev Acc | Test Acc (reg) | Test Acc (irreg) | Prod. Prob. (reg) | Prod. Prob. (irreg) | Rating (reg) | Rating (irreg) |
|---|---|---|---|---|---|---|---|
| A&H MGL | - | 99.7 | 38.0 | .33 | .30 | .50 | .49 |
| K&C* | - | 98.9 | 28.6 | .48 | .45 | - | - |
| C&al. Agg.** | - | - | - | .45 | .19 | .43 | .31 |
| BiLSTMAttn | 93.33 | 97.48 (.65) | 9.05 (5.24) | .28 | .36 | .16 | .46 |
| BiLSTMNoAttn | 76.37 | 82.72 (2.06) | 7.62 (3.33) | .14 | .44 | .23 | .35 |
| UniLSTMAttn | 92.45 | 96.53 (.68) | 20.00 (4.38) | .35 | .41 | .40 | .32 |
| UniLSTMNoAttn | 73.49 | 77.72 (1.64) | 10.48 (10.24) | .22 | .43 | .28 | .34 |
| Transformer | 94.88 | 99.21 (.53) | 10.95 (11.46) | .38 | .47 | .58 | .58 |
*Trained and tested on a different random split. **Trained and tested on all training data.

Table 1: English results for both regular (reg) and irregular (irreg) inflections for all architectures and metrics. Along with accuracy, we report Spearman's $\rho$ between average model rating and our two human metrics. Standard deviations are given in parentheses.

# 3 Experiments

# 3.1 Languages and Data

We use the same data as previous work on English past tense and German number inflection.

English We experiment with the English past tense data from A&H, following both K&C and C&al. For training, we split the CELEX (Baayen et al., 1996) subset produced by A&H, consisting of 4253 verbs (218 irregular), into an 80/10/10 random train/dev/test split following K&C. $^{1}$ We ensure that $10\%$ of the irregular verbs are in each of the development and test sets.

The English nonce words from A&H, used for computing the correlation of model ratings with human ratings and production probabilities, comprise 58 made-up verb stems, each of which has 1 regular and 1 irregular past tense inflection. 16 verbs have an additional irregular form (58 regulars and 74 irregulars total). All English data is in the phonetic transcription provided by A&H.

German We also experiment with the German dataset from McCurdy et al. (2020a), who released train/dev/test splits consisting of 11,243 pairs of singular and plural nouns in the nominative case taken from UniMorph (McCarthy et al., 2020). They added gender, the only inflection feature provided, by joining UniMorph with a Wiktionary scrape.

The German wugs come from M95, who built a set of 24 monosyllabic nonce nouns: 12 of which are rhymes, resembling real words in their phonology, and 12 of which are non-rhymes, representing atypical phonology. Human ratings and production probabilities, however, are taken from M&al., who administered an online survey to 150 native German speakers.
Each participant was prompted with the nouns from M95 with the neuter determiner, and then asked to generate the plural form. Similar to A&H, after producing a plural for each noun, participants were asked to rate the acceptability of each potential plural form on a 1-5 scale. In their analysis, M&al. compare human and model behavior on 5 productive inflection classes, shown for our experiments in Table 3.

# 3.2 Evaluation Metrics

We evaluate models with respect to four metrics.

Accuracy This refers to raw accuracy on a set of real inflections that the model has not seen during training. Crucially, only the top prediction of a given model is considered, and the model's probability distribution over all predictions does not affect the score.

F1 We report F1 instead of accuracy for the German plural experiments, following M&al. Here we classify each inflected form by its suffix (e.g. /-s/), and classify inflections that do not conform to the 5 inflection classes from M&al. as "other."

Production Probability Correlation Like previous work (Kirov and Cotterell, 2018; Corkery et al., 2019; McCurdy et al., 2020a), we compare model output probabilities with production probabilities from humans. The production probability of a form is calculated by counting all forms produced for a given lemma, and then normalizing the counts to obtain a probability distribution over the human productions. In keeping with most previous work, and because we do not expect a linear relationship with the model ratings, we report Spearman's $\rho$. This is calculated within each inflection class, meaning that, e.g., for English we report a regular and an irregular $\rho$. For example, the regular $\rho$ for the set of lemmas {rife, drize, flidge} would be computed from the vector containing probabilities of the forms {rifed, drized, flidged} under the model, against the corresponding vector with human probabilities.
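The within-class correlation just described can be reproduced with a small, dependency-free Spearman implementation (a sketch; an actual analysis would typically use `scipy.stats.spearmanr`, and the probability vectors below are invented for illustration):

```python
def ranks(xs):
    # Average ranks (1-based), assigning tied values their mean rank.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            out[order[k]] = mean_rank
        i = j + 1
    return out

def spearman_rho(a, b):
    # Spearman's rho is the Pearson correlation of the rank vectors.
    ra, rb = ranks(a), ranks(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Regular-class probabilities for three nonce lemmas,
# human vs. model (made-up numbers):
human = [0.90, 0.60, 0.80]
model = [0.70, 0.50, 0.95]
print(round(spearman_rho(human, model), 2))  # 0.5
```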
Rating Correlation Finally, we compare model ratings to the average human rating of each form, again reporting $\rho$ within each inflection class. Here, rather than normalizing over production frequencies, humans were prompted with an inflection for a given lemma and asked to rate it on a scale that differed slightly between datasets. For each lemma, we thus get an average rating for a regular form, as well as for an irregular form.

# 3.3 Neural Network Wug Test

In order to compare our models to humans, we compute analogous values to the human ratings and production probabilities. We investigate two strategies: normalizing the inflected form counts output by our models, and computing the average probability of each form under our models.

Model Production Probability Previous work (Corkery et al., 2019; McCurdy et al., 2020a) decoded outputs from multiple models and aggregated the resulting forms: given a lemma and a set of $n$ models trained with different random seeds, an inflected form is sampled from each model, resulting in forms $f_{1},\ldots ,f_{n}$, where forms need not be unique. The frequency of each form is then normalized to obtain a probability distribution. For example, given the nonce lemma rife, the probability of the past tense form rifed is computed as

$$
\frac{1}{n} \sum_{i=1}^{n} \begin{cases} 1, & \text{if } f_i = \text{rifed} \\ 0, & \text{otherwise} \end{cases}
$$

C&al. propose a version of this in their aggregate model, in which they sample 100 forms from each model and normalize the resulting form frequencies. M&al., who instead train 25 randomly initialized models, perform the same aggregation over the top prediction of each model. We take the approach of M&al. (though we train only 10 models) to investigate model productions qualitatively. This metric is intuitively similar to quantifying human production probabilities if we consider one model to be one human participant.
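As a concrete sketch of this aggregation (with made-up model outputs), normalizing the top-1 predictions over the $n$ seeds implements the indicator sum above:

```python
from collections import Counter

def model_production_probs(predictions):
    # `predictions` holds the top-1 output of each of the n randomly
    # initialized models for one nonce lemma. Normalizing the counts
    # computes (1/n) * sum_i 1[f_i = form] for every observed form.
    n = len(predictions)
    return {form: count / n for form, count in Counter(predictions).items()}

# Ten models' top predictions for the nonce lemma "rife" (invented):
preds = ["rifed"] * 8 + ["rofe"] * 2
probs = model_production_probs(preds)
print(probs)  # {'rifed': 0.8, 'rofe': 0.2}
```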
Model Rating Because the aggregate outputs method considers only the most likely prediction aggregated over the same architecture trained on the same dataset, we expect the prediction to typically be the same for each model. We instead report correlations with the probability of inflected forms under each model in Tables 1 and 3. K&C correlate this value with human production probabilities, and C&al. use this method in an experiment to compute individual model ratings.

More formally, given a lemma $l$ and an inflected form $f$ of length $k$, we compute

$$
p(f \mid l) = p\left(f_{1}, \dots, f_{k} \mid l\right) \tag{2}
$$

$$
p(f \mid l) = \prod_{i=1}^{k} p\left(f_{i} \mid f_{i-1}, l\right) \tag{3}
$$

where $f_{i}$ is the $i$th character of $f$. We force the model to output each inflected form $f$ to get its probability. In practice, we modify Equation 3 to compute a length-normalized probability because $p(f \mid l)$ becomes smaller as $f$ increases in length. For $f$ of length $k$, we have

$$
p(f \mid l) = \sqrt[k]{\prod_{i=1}^{k} p\left(f_{i} \mid f_{i-1}, l\right)} \tag{4}
$$

We expect computing ratings in this way to be similar to the aggregate model of C&al. described above. That is, the probability of a form $f$ computed by aggregating $n$ forms from a single model's probability distribution should approach $p(f \mid l)$ as $n \to \infty$. Finally, we compute the average probability of a form from all 10 randomly initialized models, and refer to it as the model rating.

# 4 Results

We present experimental results in Tables 1, 2, and 3 in terms of both inflection accuracy and correlation with human behavior – our main focus. All correlations for neural models trained in this work are given with respect to the model rating, and not the model production probability. We report results from training MGL on our data, and include the results reported by K&C, C&al., and M&al.
in the appropriate tables for reference.

# 4.1 English

For English, many of our models correlate better for irregulars than regulars, unlike previous work for which the strongest correlations occurred for regular verbs. As we do not have the same train
| Model | Dev Acc. | /-(e)n/ | /-e/ | /-∅/ | /-er/ | /-s/ | other |
|---|---|---|---|---|---|---|---|
| M&al. | 92.10 | 95.00 | 87.00 | 92.00 | 84.00 | 60.00 | 42.00 |
| BiLSTMAttn | 89.37 | 93.93 (0.6) | 88.08 (0.9) | 92.43 (0.6) | 79.07 (5.1) | 51.75 (4.6) | 45.36 (4.0) |
| BiLSTMNoAttn | 54.65 | 74.16 (1.9) | 63.56 (2.4) | 75.57 (2.1) | 51.26 (3.7) | 29.58 (7.4) | 9.07 (0.6) |
| UniLSTMAttn | 86.40 | 93.39 (0.6) | 87.35 (1.0) | 92.49 (1.1) | 69.78 (5.3) | 52.36 (4.5) | 44.06 (5.8) |
| UniLSTMNoAttn | 48.71 | 69.69 (2.2) | 58.31 (2.4) | 71.98 (1.7) | 46.64 (5.2) | 32.54 (7.7) | 8.08 (0.4) |
| Transformer | 91.04 | 92.93 (0.4) | 87.81 (0.7) | 93.86 (0.3) | 65.44 (4.7) | 57.89 (2.0) | 57.47 (4.5) |
Table 2: Average German test F1 on all German plural inflection classes for all architectures. Standard deviations are given in parentheses. Dev accuracies for our experiments were computed with greedy decoding.
| Model | Prod. /-(e)n/ | Prod. /-e/ | Prod. /-∅/ | Prod. /-er/ | Prod. /-s/ | Prod. avg. | Rating /-(e)n/ | Rating /-e/ | Rating /-∅/ | Rating /-er/ | Rating /-s/ | Rating avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| M&al. | .28 | .13 | -.05 | .33 | .20 | - | - | - | - | - | - | - |
| BiLSTMAttn | .11 | .08 | -.14 | .24 | .38 | .20 | .36 | .44 | .06 | .36 | .39 | .32 |
| BiLSTMNoAttn | .44 | .08 | -.12 | .27 | .39 | .30 | .51 | .16 | -.29 | .30 | .31 | .20 |
| UniLSTMAttn | .09 | .16 | -.13 | .36 | .39 | .25 | .22 | .27 | -.16 | .46 | .44 | .25 |
| UniLSTMNoAttn | .14 | .15 | .08 | .17 | .23 | .15 | .24 | .16 | -.17 | .05 | .20 | .10 |
| Transformer | .11 | .30 | -.13 | .28 | .50 | .20 | .48 | .59 | .15 | .50 | .71 | .49 |
Table 3: German wugs Spearman's $\rho$ for the average rating of each model with human production probabilities (left) and average human ratings (right). We report the macro average (avg.) over all inflection classes for both.

and test splits, it is difficult to draw conclusions from this result. We predominantly focus on performance differences between the models trained in our experiments, including MGL.

Accuracy The accuracies from this experiment generally reflect our expectations from prior work. The Transformer attains the highest test accuracy, LSTMs with attention always achieve higher accuracy than those without, and bidirectional LSTMs show modest improvements over their unidirectional counterparts. However, the unidirectional LSTMs outperform their bidirectional counterparts on irregular accuracy (+2.86 and +10.95). Additionally, the Transformer has a low irregular accuracy, though with a very high standard deviation over all 10 runs, indicating at least one run was an outlier with much higher accuracy.

Correlation The trend in accuracy for attentional LSTMs does not strictly carry over to correlation. LSTMs without attention typically correlate with humans slightly better than their attentional counterparts for irregulars. Additionally, unidirectional models result in higher regular correlations, which is in contrast to the higher irregular accuracy. Irregular correlations are fairly similar across LSTMs, with the exception of the BiLSTM correlation with human ratings, which is much higher than the other LSTM correlations. We also reproduce previous results showing that A&H's rule-based model, MGL, is better correlated than any LSTM model. The Transformer, however, is correlated most highly with humans among all experiments that we ran.

# 4.2 German

We refer to F1 in Table 2, and correlation with humans in Table 3.
Notably, all models typically correlate better with human ratings than with production probabilities, though the two metrics have a positive linear relationship $(r = 0.75)$. Intuitively, the task of assigning a probability to a form is more like the human rating task than decoding the single most likely form. We present a graph of model production probabilities and model ratings for the German wugs in Figure 2.

F1 F1 scores follow a similar trend to English. In contrast to the very small performance gap in Dankers et al. (2021), LSTMs with attention clearly perform better in terms of F1 than those without – though our training dataset from M&al. is much smaller than the one they used, which might amplify the gap. Directionality has much less effect on F1 than attention for German, with the unidirectional LSTMs actually outperforming the bidirectional ones for the infrequent /-s/ class in our experiments. The Transformer attains high (though not necessarily the highest) F1 scores for every class.

Correlation The Transformer clearly correlates most highly with human ratings, attaining a moderate correlation (0.48-0.59) for $/-\mathrm{e}/$, $/-(\mathrm{e})\mathrm{n}/$, and $/-\mathrm{er}/$,

![](images/32c67a715e1e3d2ebde9f965ae9b7aa29f8d26bf5926d2966a054ef7cd4c38ea.jpg)
Figure 2: German plural productions (left) and average probabilities (right) for each architecture in Rhyme (R) and Non-Rhyme (NR) contexts for all lemmas and all random initializations. Shorthands are used for architectures: UA refers to UniLSTMAttn, whereas BN refers to BiLSTMNoAttn, for example.

![](images/b64dfdb3573145727fd19fe41b74e8f88935018fb522aad397a2267f6be44319.jpg)

and a high correlation (0.71) for $/-\mathrm{s}/$. All architectures correlate poorly with $/-\emptyset/$, despite very high F1.
Looking more closely at $/-\emptyset/$, it consistently receives very low ratings (as is the case for human ratings), and it was never produced as a model's best output, as can be seen in Figure 2. However, there are only 3 $/-\emptyset/$ inflections in the training data that fit the same phonological context as the wugs. Across all contexts, though, $/-\emptyset/$ is a very common inflection in the training data, which explains its high accuracy on the test set.

There is no clear trend between LSTMs in terms of correlation with human production probability, with rather low $\rho$ overall. However, in the case of human ratings, LSTMs with attention always correlate better than those without, with the exception of the most frequent class overall in the training data, $/-(\mathrm{e})\mathrm{n}/$. BiLSTMNoAttn is most strongly correlated for $/-(\mathrm{e})\mathrm{n}/$, in contrast to its lower F1, demonstrating that removing attention leads to a lower F1, but also to a more human-like probability of $/-(\mathrm{e})\mathrm{n}/$.

Regarding directionality, unidirectional LSTMs always outperform their bidirectional counterparts for the infrequent /-s/ class in our experiments. UniLSTMAttn correlates better with humans than any other LSTM for the infrequent classes /-er/ and /-s/. However, BiLSTMAttn has the highest correlation for the frequent /-e/ and /-(e)n/.

# 5 Analysis

We mainly analyze the correlation between (average) model ratings and human ratings. We find that the Transformer correlates best with human ratings with few exceptions; indeed, it attains a statistically significant positive correlation for all inflection classes in both languages, with the exception of $/-\emptyset/$ in German. It is also highly accurate, as in previous work (Wu et al., 2021).

Regarding LSTM architectural decisions, unsurprisingly, attention and bidirectionality typically increase accuracy in both languages.
The positive effect of attention is similar for correlations, with some exceptions. Attention almost always leads to better correlations in German, with the interesting exception of $/-(\mathrm{e})\mathrm{n}/$. Given that humans rate $/-(\mathrm{e})\mathrm{n}/$ most highly on average, the higher correlation could be because, without attention, LSTMs are very sensitive to the high $/-(\mathrm{e})\mathrm{n}/$ frequency in the training set. The attentional LSTMs might learn the monosyllabic, neuter context that applies to the wugs, for which there are very few $/-(\mathrm{e})\mathrm{n}/$ training examples. Despite slightly higher accuracy for bidirectional LSTMs, unidirectional LSTMs tend to attain higher correlations with both human metrics for English, especially for the more frequent regular inflections.

Conversely, in German, the bidirectional LSTMs correlate better for the more frequent $/-(\mathrm{e})\mathrm{n}/$ and $/-\mathrm{e}/$ classes, but UniLSTMAttn correlates better for the rarer $/-\mathrm{er}/$ and $/-\mathrm{s}/$ classes. The dichotomy between just one highly productive class in English and several productive classes in German may explain the first observation: if unidirectional LSTMs overfit to the frequent class, then they might appear to correlate better in English, but not German. However, this would not explain the German class correlations for infrequent inflections, which could be explored in future work.

| | reg | irreg | /-(e)n/ | /-e/ | /-∅/ | /-er/ | /-s/ |
|---|---|---|---|---|---|---|---|
| $r$ | 0.44 | -0.31 | 0.01 | 0.80 | 0.73 | 0.70 | 0.83 |

Table 4: Pearson $r$ between model acc. (or F1) and correlation with human ratings within each inflection class ($n = 5$).

| | BA | BN | UA | UN | Trm |
|---|---|---|---|---|---|
| $r$ | -0.57 | -0.33 | -0.37 | -0.39 | -0.38 |

Table 5: Pearson $r$ between model acc. (or F1) and correlation with human ratings within each model ($n = 7$).

German Model Productions The model production counts in Rhyme versus Non-Rhyme contexts were important for the conclusion in M&al. that BiLSTMAttn is not a good model of human behavior. We thus investigate this in Figure 2.

Most of the criticisms from M&al. apply to the productions in our experiments as well. One new observation is that, without attention, LSTMs predict many "other" forms for NR contexts, but not for R. This likely means that Non-Rhymes lead to decoding errors for these models due to the unfamiliar context. Additionally, despite several behaviors that differ from humans in the Transformer productions, its second most produced inflection class is $/-(\mathrm{e})\mathrm{n}/$, like humans, and unlike any LSTM model. The right side of Figure 2 instead displays the average model rating of each inflection class, on which we base our correlations in Tables 1 and 3.

The average model rating of an inflection class represents the probability assigned to it, averaged over all 10 randomly initialized models and all 24 lemmas. The $/-\mathrm{e}/$ inflection accounts for a much smaller share of the probability mass on average than its production probability. The preference for $/-\mathrm{e}/$ in the NR context, which diverges from human ratings, is smaller by this metric for the Transformer and the LSTMs with attention. Furthermore, $/-(\mathrm{e})\mathrm{n}/$ has a more reasonable average probability for most models when compared to the human ratings in M&al., despite the preference for Rhymes, which diverges from human behavior. However, for $/-\mathrm{s}/$ the Transformer shows a much higher average probability for Non-Rhymes than for Rhymes, which is more in line with human ratings.

Overall, this means model ratings of German noun plurals look more similar to human ratings than model productions do to human productions. The Transformer is a better account of human behavior than the LSTM, though it still diverges in some ways. Dankers et al. (2021) warned that the /-s/ behavior may be explainable by a simple heuristic, though, so this behavior may not actually indicate cognitive plausibility.

Accuracy vs.
Correlation The task of predicting the most likely inflection for an unknown word (measured by accuracy or F1) is not the same as rating multiple inflections (measured by Spearman's $\rho$). We thus investigate the relationship between these two tasks by measuring Pearson's $r$ between them, to see whether better inflection models in terms of accuracy are also more human-like. First, we consider the relationship for all models and inflection classes in both datasets and find no correlation ($r = -0.17$, $n = 35$). However, some inflection classes or models may behave differently than others. We refer to Table 5 to investigate this relationship within each architecture. In Table 4, we check the correlation within each inflection class. There is not sufficient data to draw statistically significant conclusions in either case, but the correlations that we report can still characterize the relationship in our experiments. We find that all architectures show a negative correlation. This implies that models are more accurate for inflection classes on which they correlate poorly with humans, and vice versa. However, Table 4 shows that all German inflection classes have a positive correlation between the two metrics, with the exception of $/-(\mathrm{e})\mathrm{n}/$. This is likely because $/-(\mathrm{e})\mathrm{n}/$ is highly frequent in the training set, but is less suitable for the monosyllabic, neuter wugs. Neither English inflection class shows a strong relationship, though.

# 6 Conclusion

We ask which neural architecture most resembles human behavior in a wug test. We introduce results on a wider range of architectures than previous work and find that the Transformer, a state-of-the-art model for morphological inflection, frequently correlates best with human wug ratings. Despite this, a closer look at model ratings and productions on German plural inflection shows that neither model closely resembles human behavior.
We also find that, while attention is crucial for LSTM inflection accuracy, it does not always lead to higher correlations with humans. Additionally, the often less accurate unidirectional model sometimes correlates better than its bidirectional counterpart, especially in the case of infrequent German plural classes. Finally, while for some inflection classes more accurate models correlate better with humans, there is no clear relationship between the two metrics overall. Future work might consider behavior when hyperparameters are tuned to maximize the plausibility of the probability distribution rather than accuracy. Additionally, these results motivate a closer look at the effect of LSTM encoder directionality with respect to inflection class frequency.

# Limitations

This work is limited by the scope of languages and inflection categories that our models are tested on. We present results for two specific inflection categories in two languages. Previously, McCurdy et al. (2020b) ran experiments on neural network behavior for the German plural wugs used here, which brought into question some of the conclusions found in prior work for English past tense inflection. We thus believe that expanding this work to new inflection phenomena and new languages may produce results where the findings here do not necessarily hold.

# Acknowledgments

We would like to thank Kate McCurdy and Yohei Oseki for their input to and feedback on early stages of this work. We would also like to thank the anonymous reviewers, Abteen Ebrahimi, and Ananya Ganesh for their feedback on drafts of this paper. This research was supported by the NSF National AI Institute for Student-AI Teaming (iSAT) under grant DRL 2019805. The opinions expressed are those of the authors, and do not represent views of the NSF.

# References

Adam Albright and Bruce Hayes. 2003. Rules vs. analogy in English past tenses: A computational/experimental study. Cognition, 90(2):119-161.
+R Harald Baayen, Richard Piepenbrock, and Leon Gulikers. 1996. The celex lexical database (cd-rom). +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. +Jean Berko. 1958. The child's learning of English morphology. Word, 14(2-3):150-177. +Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics. +Harald Clahsen, Monika Rothweiler, Andreas Woest, and Gary F. Marcus. 1992. Regular and irregular inflection in the acquisition of German noun plurals. Cognition, 45(3):225-255. + +Maria Corkery, Yevgen Matushevych, and Sharon Goldwater. 2019. Are we there yet? encoder-decoder neural networks as cognitive models of english past tense inflection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3868-3877. +Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared Task—Morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 10-22, Berlin, Germany. Association for Computational Linguistics. +Verna Dankers, Anna Langedijk, Kate McCurdy, Adina Williams, and Dieuwke Hupkes. 2021. Generalising to german plural noun classes, from the perspective of a recurrent neural network. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 94-108. +Rainer Goebel and Peter Indefrey. 2000. A recurrent network with short-term memory capacity learning the german-s plural. 
Models of language acquisition: Inductive and deductive approaches, pages 177-200.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.

Katharina Kann and Hinrich Schütze. 2016. Single-model encoder-decoder with explicit morphological representation for reinflection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 555-560, Berlin, Germany. Association for Computational Linguistics.

Christo Kirov and Ryan Cotterell. 2018. Recurrent neural networks in linguistic theory: Revisiting Pinker and Prince (1988) and the past tense debate. Transactions of the Association for Computational Linguistics, 6:651-665.

Gary F Marcus, Ursula Brinkmann, Harald Clahsen, Richard Wiese, and Steven Pinker. 1995. German inflection: The exception that proves the rule. Cognitive psychology, 29(3):189-256.

Arya D. McCarthy, Christo Kirov, Matteo Grella, Amrit Nidhi, Patrick Xia, Kyle Gorman, Ekaterina Vylomova, Sabrina J. Mielke, Garrett Nicolai, Miikka Silfverberg, Timofey Arkhangelskiy, Nataly Krizhanovsky, Andrew Krizhanovsky, Elena Klyachko, Alexey Sorokin, John Mansfield, Valts Ernstreits, Yuval Pinter, Cassandra L. Jacobs, Ryan Cotterell, Mans Hulden, and David Yarowsky. 2020. UniMorph 3.0: Universal Morphology. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 3922-3931, Marseille, France. European Language Resources Association.

Kate McCurdy, Sharon Goldwater, and Adam Lopez. 2020a. Inflecting when there's no majority: Limitations of encoder-decoder neural networks as cognitive models for German plurals. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1745-1756.

Kate McCurdy, Adam Lopez, and Sharon Goldwater. 2020b. Conditioning, but on which distribution? Grammatical gender in German plural inflection.
In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 59-65, Online. Association for Computational Linguistics.

Danny Merkx and Stefan L Frank. 2020. Human sentence processing: Recurrence or attention? arXiv preprint arXiv:2005.09471.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.

Steven Pinker and Alan Prince. 1988. On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28(1-2):73-193.

Sandeep Prasada and Steven Pinker. 1993. Generalisation of regular and irregular morphological patterns. Language and cognitive processes, 8(1):1-56.

David E Rumelhart and James L McClelland. 1985. On learning the past tenses of English verbs. Technical report, University of California, San Diego, Institute for Cognitive Science.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. Advances in neural information processing systems, 27.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.

Shijie Wu, Ryan Cotterell, and Mans Hulden. 2021. Applying the transformer to character-level transduction. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1901-1907, Online. Association for Computational Linguistics.
| Model | Hyperparams. |
| --- | --- |
| BiLSTMAttn | 0.93M |
| BiLSTMNoAttn | 0.90M |
| UniLSTMAttn | 0.56M |
| UniLSTMNoAttn | 0.54M |
| Transformer | 7.41M |
Table 6: Number of parameters in each model.

# A Individual Model Variance

In Figure A.2, we show the variance, via boxplots, when correlating with human ratings. Models typically have higher correlations with ratings than with production probabilities, but the two are linearly related in our results. Similar to the findings of Corkery et al. (2019), who compared to production probabilities, we find that individual BiLSTMAttn models vary quite a bit with respect to correlation with humans. For English, some models vary far less; for example, BiLSTMNoAttn has a much lower variance with respect to both regulars and irregulars than BiLSTMAttn. Similarly, the Transformer often correlates the same across different random initializations, with the exception of a few outliers. Turning to the German boxplots in Figure A.2b, we see similarly low variance for the Transformers, and typically higher variance for most LSTMs. For architectures that vary more, i.e., the LSTMs, we often see a higher correlation when the ratings are first averaged (as reported in Tables 1 and 3), but the same is often not true for English.

![](images/c1af12c9e659cece4923d357d71183c94f91b04894c126c96a1e818be2da5d96.jpg)
Figure A.1: English past tense productions (left) and average probability (right) for each architecture, for all lemmas and all random initializations.
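The quantities plotted in these boxplots are Spearman rank correlations between per-model scores and (averaged) human ratings. Spearman's $\rho$ is simply the Pearson correlation of the two rank vectors; a minimal, dependency-free sketch of the statistic (an illustration, not the paper's analysis code):

```python
def _ranks(xs):
    # Assign 1-based ranks, giving tied values their average rank.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend the tie group
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    # Spearman's rho = Pearson correlation computed on the rank vectors.
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```

Because it operates on ranks, any monotone transformation of the model scores leaves the correlation unchanged, which is why it suits comparing raw probabilities against bounded human ratings.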
![](images/d8f9a333c48672e1edf5eebf0498d2bda08f640593f6f86fe42f212d63c81f5d.jpg)

![](images/013bd366c48672f1edf5eebf0498d2bda08f640593f6f86fe42f212d63c81f5d.jpg)
(a) English past tense

![](images/c27cd44510b894bd8c770b17bc6e41bd92ee8018f89897fbe7f23f33413ff027.jpg)
(b) German plural

Figure A.2: Boxplots of Spearman's correlation for individual models with respect to average human ratings.

# Active Example Selection for In-Context Learning

Yiming Zhang and Shi Feng and Chenhao Tan {yimingz0, shif, chenhao}@uchicago.edu University of Chicago

# Abstract

With a handful of demonstration examples, large-scale language models show strong capability to perform various tasks by in-context learning from these examples, without any fine-tuning.
We demonstrate that in-context learning performance can be highly unstable across samples of examples, indicating the idiosyncrasies of how language models acquire information. We formulate example selection for in-context learning as a sequential decision problem, and propose a reinforcement learning algorithm for identifying generalizable policies to select demonstration examples. For GPT-2, our learned policies demonstrate strong abilities of generalizing to unseen tasks in training, with a $5.8\%$ improvement on average. Examples selected from our learned policies can even achieve a small improvement on GPT-3 Ada. However, the improvement diminishes on larger GPT-3 models, suggesting emerging capabilities of large language models. + +# 1 Introduction + +Large language models demonstrate the capability to learn from just a few examples (Radford et al., 2019; Brown et al., 2020; Rae et al., 2022; Zhang et al., 2022). The possibility to train a model without any parameter update has inspired excitement about the in-context learning paradigm. + +Intuitively, high in-context learning performance should require carefully chosen demonstration examples, but a recent line of work suggests otherwise — that demonstration examples are not as important as we expected, and that few-shot performance can be largely attributed to the model's zero-shot learning capacity (Min et al., 2022), across GPT-2 and GPT-3. This insight is corroborated by a parallel line of work that brings significant improvements to in-context learning performance without example selection, for example, by reordering randomly selected examples and using + +calibration (Lu et al., 2022; Zhao et al., 2021; Kojima et al., 2022). Another notable approach is to use best-of- $n$ sampling, which requires a labeled set for validation (Nakano et al., 2022). + +Our contribution in this paper is twofold. First, we revisit the effect of example selection on in-context learning. 
We show that even with reordering and calibration, we still observe a large variance across sets of demonstration examples, especially for GPT-2, while calibration reduces the variance for GPT-3 models. The high variance needs further investigation, as we take it as evidence that large language models are still not capable of efficiently and reliably acquiring new information in-context. Understanding what makes good demonstration examples sheds some light on the mechanisms that large language models use to process information.

Second, we seek to discover general trends in example selection for in-context learning across different tasks. Concretely, we use reinforcement learning to optimize example selection as a sequential decision-making problem. We argue that active example selection from unlabeled datasets is the most appropriate setting for in-context learning because fine-tuning with an existing labeled set leads to great performance with low variance. For GPT-2, we validate our learned policy on a seen task with a labeled dataset and observe a $12.1\%$ improvement over a max-entropy active learning baseline. Moreover, our learned policy is able to generalize to new tasks with a $5.8\%$ improvement, suggesting that the policy is able to capture systematic biases in how GPT-2 acquires information. Examples selected from our learned policies can even achieve a small improvement on GPT-3 Ada. However, the improvement diminishes on larger GPT-3 models. We provide further analyses to understand the properties of useful examples.

Overall, our work explores how large language models process information through the perspective of example selection and formulates active example selection as a sequential decision-making problem.
We investigate divergent behaviors between GPT-2 and GPT-3, which echo the emerging abilities of large language models, and suggest that researchers in the NLP community should collectively build knowledge and research practice in the era of large language models. $^{1}$

# 2 The Effect of Example Selection

In this section, we demonstrate the instability of in-context learning performance due to the selection of demonstration examples. We further show that existing methods (e.g., calibration, reordering) are insufficient for addressing this instability for GPT-2. In comparison, the variance of GPT-3 models can be mitigated with calibration.

# 2.1 In-context Text Classification with Demonstration Examples

We start by formally defining in-context learning. We focus on in-context learning for text classification with a left-to-right language model. All supervision is given through a "prompt" which we denote as $s$ . The prompt typically contains natural language instructions and a few demonstration examples. To make a prediction for a test example $x$ , we concatenate the prompt and the test example as a prefix, and use the language model to predict the next token: $\arg \max_y \mathbf{P}_{\mathrm{LM}}(y|s + x)$ , where $+$ denotes concatenation. Typically, instead of taking the arg max over the whole vocabulary, we restrict the model's output to a set of special tokens which correspond to the set of labels, e.g., with the word "positive" corresponding to the positive class in binary sentiment classification. In our formulation, we omit a separate variable for the special tokens, and use $\mathcal{V}$ to refer to both the label set and the set of proxy tokens for simplicity.

To summarize, a prompt in this paper is a sequence of $k$ labeled examples concatenated together: $s = (x_{1},y_{1}),(x_{2},y_{2}),\ldots ,(x_{k},y_{k})$ .
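Concretely, prompt construction and the label-restricted prediction can be sketched as follows; `p_lm` is a toy stand-in for the language model's next-token probability $\mathbf{P}_{\mathrm{LM}}(y|s + x)$ (a hypothetical keyword scorer, purely for illustration, not an actual LM):

```python
# Sketch of in-context classification with a proxy-token-restricted argmax.

def p_lm(prefix: str, token: str) -> float:
    """Toy stand-in for P_LM(token | prefix): keyword overlap with the
    prefix.  A real system would query a language model here."""
    keywords = {"positive": ("good", "great"), "negative": ("bad", "awful")}
    return 1.0 + sum(prefix.count(w) for w in keywords[token])

def build_prompt(demos):
    """s = (x1, y1), (x2, y2), ..., (xk, yk), concatenated as text."""
    return "".join(f"Review: {x}\nSentiment: {y}\n" for x, y in demos)

def predict(demos, x, label_set=("positive", "negative")):
    prefix = build_prompt(demos) + f"Review: {x}\nSentiment:"
    # Restrict the argmax to the proxy-token set V, not the full vocabulary.
    return max(label_set, key=lambda y: p_lm(prefix, y))
```

For instance, `predict([], "a good movie")` returns `"positive"` under the toy scorer; the structural point is that all supervision enters through the concatenated prefix.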
And the prediction for a test input $x$ is the label with the highest likelihood of being generated by the language model: $\arg \max_{y\in \mathcal{V}}\mathbf{P}_{\mathrm{LM}}(y|s + x)$ . $^{2}$

Experiment setup. Following Zhao et al. (2021), we conduct our experiments on AGNews (Zhang
| Dataset | Domain | # classes | avg. length |
| --- | --- | --- | --- |
| AGNews | Topic cls. | 4 | 37.8 |
| Amazon | Sentiment cls. | 2 | 78.5 |
| SST-2 | Sentiment cls. | 2 | 19.3 |
| TREC | Question type cls. | 6 | 10.2 |
Table 1: Dataset information.

![](images/e320e6293f5eec8c37d442d7894dee0edcd404a97964dd200a24a374f2ecab8.jpg)
Figure 1: Zero-centered in-context learning accuracy of GPT-2 on 30 random sets of 4 demonstration examples. Each dot indicates performance of the best permutation for one set of demonstration examples. The $y$-axis represents the accuracy difference from the mean accuracy of random demonstration examples.

et al., 2015), SST-2 (Socher et al., 2013) and TREC (Voorhees and Tice, 2000). We additionally include Amazon (Zhang et al., 2015) since it contains longer texts than the remaining datasets. Table 1 gives basic information about the tasks.

Using GPT-2 345M (GPT-2), GPT-3 Ada (ADA) and GPT-3 Babbage (BABBAGE) as the in-context learning models, we report 4-shot example selection performance across all experiments.

# 2.2 Sensitivity to Example Selection

We first highlight the sensitivity of GPT-2 to example selection. In Figure 1, we plot the in-context learning performance of 30 random sequences of demonstration examples of length 4. Across all 4 tasks, the maximum and minimum performance due to random sampling differ by $>30\%$ . Additionally, for 3 out of the 4 tasks (AGNews, SST-2 and TREC), the worst set of demonstration examples leads to in-context learning performance below random guessing (e.g., it is $10.0\%$ on TREC, below the $16.7\%$ accuracy of guessing randomly among the 6 labels in TREC).

Reordering sequences alone cannot address the instability. Lu et al. (2022) identify the ordering of demonstration examples as the cause for variance, and propose heuristics to reorder demonstration examples.

![](images/596802e7ce3dc8a8f4abaf0fa97beff2d43ab57cba3078587042479100004ebf.jpg)
Figure 2: In-context learning accuracy of 30 random sets of 4 demonstration examples with calibration. Each dot indicates performance of the best permutation for one set of demonstration examples. Accuracy over random examples (no calibration) is plotted.

For such an approach to be effective, the underlying assumption is that there exist good orderings for most sets of demonstration examples.

In Figure 1, we additionally report the highest possible performance among $4! = 24$ permutations for each of the 30 sets using a validation set of 100 examples. The reordering performance reported here is highly optimistic for a true few-shot setting (Perez et al., 2021) since a validation set cannot be assumed available. As expected, taking the best permutation on a validation set improves test performance: we observe an average increase of $8.1\%$ over random demonstration examples.

However, these best orderings of examples still lead to a wide range of possible performance. On AGNews, we observe a maximum accuracy of $79.6\%$ and a minimum accuracy of $32.7\%$ after considering the best possible orderings. On TREC, the best ordering for 9 out of 30 sets of examples leads to performance below random examples. These observations suggest that there are simply no good orderings for considerable proportions of demonstration sets, motivating the need for selecting examples beyond merely reordering.

Calibration does not decrease variance for GPT-2, either. Zhao et al. (2021) find that language models are poorly calibrated when used directly as in-context classifiers, and argue that calibration is the key missing piece to improve and stabilize in-context learning performance. They propose using dummy examples (e.g., "N/A") as anchors for calibrating the language model, since a calibrated language model should make neutral predictions for these content-free examples.

Figure 2 demonstrates the effectiveness of cali
| Model | AGNews | Amazon | SST-2 | TREC |
| --- | --- | --- | --- | --- |
| GPT-2 | 44.5<sub>9.3</sub> | 87.5<sub>3.7</sub> | 61.7<sub>14.4</sub> | 29.4<sub>12.8</sub> |
| GPT-2 (C) | 55.2<sub>12.0</sub> | 76.3<sub>14.0</sub> | 66.2<sub>14.7</sub> | 40.8<sub>5.4</sub> |
| ADA | 62.9<sub>17.5</sub> | 87.0<sub>6.1</sub> | 65.0<sub>10.2</sub> | 21.2<sub>6.6</sub> |
| ADA (C) | 64.0<sub>4.0</sub> | 90.0<sub>1.2</sub> | 73.8<sub>9.7</sub> | 22.1<sub>5.3</sub> |
| BABBAGE | 68.0<sub>14.0</sub> | 93.4<sub>0.8</sub> | 92.2<sub>2.7</sub> | 27.4<sub>5.8</sub> |
| BABBAGE (C) | 78.1<sub>6.1</sub> | 92.7<sub>1.6</sub> | 90.8<sub>1.1</sub> | 36.0<sub>4.0</sub> |
Table 2: Performance of GPT-2, ADA and BABBAGE across 5 random sets of 4-shot demonstration examples. C indicates calibration. Standard deviation is reported as subscripts.

bration in improving few-shot performance. With calibration, we observe an increase in average performance of varying magnitude on 3 out of the 4 tasks (AGNews, SST-2 and TREC), but a marginal decrease of performance on Amazon. For example, on AGNews where calibration improves performance the most, we observe a maximum accuracy of $79.5\%$ and a minimum accuracy of $26.1\%$ , resulting in a gap of $53.4\%$ .

Interestingly, we observe varying behavior when combining calibration with demonstration reordering. On the binary tasks (Amazon and SST-2), we observe prompt reordering to be quite effective, consistently leading to performance above random examples. On the other hand, for AGNews (4 labels) and TREC (6 labels), we observe much greater variance.

In summary, with GPT-2, existing methods do not provide satisfactory solutions to the sensitivity of in-context learning to demonstration examples. Reordering demonstrations requires a well-behaving demonstration set, which is often not the case, and does not reduce variance. Calibration, though it improves performance, does not reduce variance either, and its effectiveness deteriorates with a large label set. These findings motivate the need for identifying high-quality demonstration examples for consistent and performant in-context learning.

Variance persists to some degree with GPT-3. In Table 2, we report the performance of GPT-2, ADA and BABBAGE on 5 random sets of demonstration examples. $^{3}$ GPT-3 models are not immune to instability due to resampling demonstration examples. On multi-labeled tasks including AGNews and TREC, we observe both ADA and BABBAGE demonstrate significant variance, and on binary tasks such as Amazon and SST-2, much smaller variance is observed.
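The "(C)" rows use contextual calibration (Zhao et al., 2021): each label probability is divided by the model's probability for that label on a content-free input such as "N/A", and the result is renormalized. A minimal sketch of that rescaling step, as our own illustration rather than the authors' implementation:

```python
def calibrate(p_test, p_content_free):
    """Divide each label probability by the model's probability for that
    label on a content-free input (e.g. "N/A"), then renormalize.  This is
    the diagonal-W, zero-bias form of contextual calibration."""
    scaled = [p / q for p, q in zip(p_test, p_content_free)]
    z = sum(scaled)
    return [s / z for s in scaled]
```

For example, if the model assigns `[0.6, 0.4]` to the labels on "N/A" (a bias toward the first label), a test-time distribution of `[0.6, 0.4]` calibrates to the neutral `[0.5, 0.5]`, so only deviations from the model's content-free bias count toward the prediction.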
This difference is potentially due to the difficulty of the task and the multi-class nature of AGNews and TREC. We will address the latter in §4.3. Another interesting observation is that variance diminishes with calibration. However, one may argue that calibration no longer reflects the model's innate ability to acquire information.

Overall, the differences in model behavior between GPT-2 and GPT-3 add evidence to the emergent ability of large language models (Wei et al., 2022; Bowman, 2022). We hypothesize that the variance will be even smaller with GPT-3 Davinci.

# 3 Active Example Selection by RL

Given a set of unlabeled examples, can we choose the right ones to be annotated as demonstration examples? In this section, we formulate the problem of active example selection for in-context learning. Following the definition of in-context learning in §2.1, constructing a prompt for in-context learning boils down to choosing a sequence of demonstration examples.

We emphasize that by selecting from unlabeled examples, our setup is analogous to active learning, where we select examples to label. We think that this is the most appropriate setting for in-context learning because fine-tuning can lead to great performance with low variance if we already have a moderately-sized labeled set (e.g., 100 instances).

As in-context learning uses a small number of examples, we formulate active example selection as a sequential decision-making problem, where the prompt is constructed by selecting and annotating one demonstration example at a time. We use a Markov Decision Process (MDP) to formalize the problem, discuss our design of the reward function, and introduce our solution to example selection using reinforcement learning (RL).

# 3.1 Active Example Selection as an MDP

Given a set of unlabeled examples, we want to maximize the expected accuracy on unseen test examples by getting up to $k$ annotations.
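The annotation loop formalized below — pick an unlabeled example, observe its label, extend the prompt, stop at the budget $k$ or on the end-of-prompt action — can be sketched as a generic episode. Here `policy` and `reward_fn` are hypothetical placeholders (in the paper, the reward is validation accuracy of the LM under the current prompt), and per-step rewards use the marginal-gain form $f(\mathrm{LM}_{s+a}) - f(\mathrm{LM}_{s})$ of §3.2:

```python
STOP = None  # stands in for the special end-of-prompt action


def run_episode(pool, labels, policy, reward_fn, k=4):
    """Roll out one example-selection episode.

    pool:      unlabeled inputs S_X
    labels:    oracle mapping x -> y, queried only once x is selected
    policy:    maps (state, remaining actions) -> chosen x, or STOP
    reward_fn: scores a prompt, e.g. LM validation accuracy f(LM_s)
    Returns the final prompt and the shaped per-step rewards.
    """
    state, rewards = [], []
    remaining = list(pool)
    while len(state) < k and remaining:
        a = policy(state, remaining)
        if a is STOP:
            break
        remaining.remove(a)
        prev = reward_fn(state)
        state = state + [(a, labels[a])]          # annotate on selection
        rewards.append(reward_fn(state) - prev)   # marginal-utility reward
    return state, rewards
```

Because each step's reward is a difference of consecutive prompt scores, the rewards telescope: their sum equals the final prompt's score minus the zero-shot score, so maximizing return maximizes the true objective.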
The space of possible prompts grows exponentially with the number of unlabeled examples and is intractable to enumerate, so we treat it as a sequential decision-making problem: given the pool of unlabeled examples $\mathbf{S}_{\mathcal{X}} = \{x_i\}$ , choose one example $x_{i}$ , obtain its groundtruth label $y_{i}$ , append the pair $(x_{i},y_{i})$ to our prompt, and repeat this process until either the budget $k$ is exhausted or the policy takes a special action $\bot$ indicating early termination.

Action space and state space. The action space of the MDP is the set of unlabeled examples plus the special end-of-prompt action: $\mathcal{A} = \mathbf{S}_{\mathcal{X}}\cup \{\bot \}$ . After choosing an action $x_{i}$ we observe its label $y_{i}$ , and the state is defined by the prefix of the prompt $s = (x_{1},y_{1}),(x_{2},y_{2}),\ldots ,(x_{i},y_{i})$ .

Reward. The reward $r$ can be defined based on an arbitrary scoring function $f$ of the language model LM when conditioned on the prompt $s$ , denoted $r = f(\mathrm{LM}_s)$ . In practice, we use the accuracy on a labeled validation set as the reward.

It follows that we need to have access to a validation set during training, which we refer to as the reward set. Similarly, we also have a labeled set from which our policy learns to select examples. We refer to this labeled set as the training set. Ideally, our learned policies identify generalizable qualities of demonstration examples and can select useful unlabeled examples in a task where the policy has not observed any labeled examples. We will explore different setups to evaluate our learned policies.

It is useful to emphasize how active example selection deviates from the standard reinforcement learning setting. First, the action space is the set of examples to be selected, which can be variable in size. Furthermore, the actions during test time can be actions that the policy has never observed during training.
Similarly, the classification task can differ from training, analogous to a new environment. Such generalizations are not typically assumed in reinforcement learning, due to the challenging nature of the problem (Kirk et al., 2022).

# 3.2 Active Example Selection by Q-learning

Framing active example selection as a sequential problem allows us to use off-the-shelf RL algorithms to train a policy. We opt to use Q-learning (Mnih et al., 2013) for its simplicity and effectiveness.

The objective of Q-learning is to approximate the optimal action-value function $Q^{\star}(s,a)$ , i.e., the maximum (discounted) future reward after taking action $a$ in state $s$ . The Bellman equation (Bellman, 1957) allows a recursive formulation of the optimal action-value function $Q^{\star}$ as

$$
Q ^ {\star} (s, a) = \mathbb {E} _ {s ^ {\prime}} \left[ r (s, a) + \gamma \max _ {a ^ {\prime}} Q ^ {\star} (s ^ {\prime}, a ^ {\prime}) \right].
$$

We collect off-policy training data in our implementation and thus use offline Q-learning to leverage off-policy data (Prudencio et al., 2022). Specifically, we use conservative Q-learning (CQL) (Kumar et al., 2020), which uses regularization to prevent the overestimation of Q-values for unobserved actions in training data, contributing to a robust policy when evaluated in an unfamiliar environment. More details about CQL can be found in Appendix A.

Generation of off-policy data. Offline learning requires off-policy training data. We run a random policy for a fixed number (2,000) of episodes to create the off-policy data. For every episode, we randomly sample 4 demonstration examples, and compute features and intermediate rewards. Then, we store the trajectory as training data.

Feature-based representation of actions. In our framework, a state $s$ is a sequence of examples, and we simply use the number of already selected examples $|s|$ as the feature representation. To enable our method to be deployed in an active example selection process, we assume no access to labels prior to selecting an example. That is, when representing an example to be selected $a = (x,y)$ , we omit the label $y$ and simply use the predicted label probabilities conditioned on the current examples $\mathbf{P}_{\mathrm{LM}}(\cdot | s + x)$ . We additionally include the entropy of the prediction.

Reward shaping. The previously defined reward function only rewards a completed prompt, while intermediate states receive zero reward. Sparse reward schemes are known to make learning difficult (Pathak et al., 2017). Therefore, we propose an alternative reward function based on the marginal utility of actions (Von Wieser, 1893). At time step $t$ we define $r: S \times \mathcal{A} \to \mathbb{R}$ as

$$
r (s, a) = f \left(\mathrm {L M} _ {s + a}\right) - f \left(\mathrm {L M} _ {s}\right).
$$

Intuitively, $r$ measures the "additional gain" on objective $f$ from acquiring the label of example $a$ . Notice that $f(\mathrm{LM}_{\emptyset})$ can be conveniently interpreted as the zero-shot performance of the language model. Maximizing this marginal utility reward function is indeed equivalent to optimizing the true objective $f$ : observe that the summation of rewards along a trajectory is a telescoping series, leaving only the final term $f(\mathrm{LM}_{s_{\perp}})$ minus a constant term that does not affect the learned policy. It turns out
After concatenating state and action representations, we use a 3-layer MLP as the Q-network: $\hat{Q}(s,a) = \mathrm{MLP}([s\parallel a])$ . We report hyperparameters details in Appendix B. + +# 4 Results + +In this section, we investigate the performance of our learned policies for GPT-2. Due to the significant costs of generating episodes, we only apply the policies learned from GPT-2 and examine direct transfer results on GPT-3. Baselines, oracles and our method have access to the same underpinning calibrated GPT-2 model. + +# 4.1 Setup + +Following our framework in §3, during training, we use a training set from which the trained policy picks 4 examples for demonstration, as well as a reward set, which is a validation set where we compute rewards for the learning agent. Each set has 100 examples and our training scheme uses a total of 200 examples. + +Depending on the availability of a reward set, we consider three evaluation settings: + +- SEEN EXAMPLES, SAME TASK. In this setting, we use the learned policy to pick demonstration examples from the training set. We expect our method to be competitive with oracle methods that select examples based on rewards. +- NEW EXAMPLES, SAME TASK. We consider a more challenging setting where the learned policy picks from an unlabeled set of 100 or 1000 previously unseen examples. The learned policy still benefits from access to the reward set during training as the classification task is the same, but it cannot perform well simply by memorizing good sequences. +- NEW EXAMPLES, NEW TASK. Finally, we ask the learned policy to pick examples on a new task that it has never seen. Specifically, we adopt a multi-task learning approach, allowing the policy + +
| Method | Average | AGNews | Amazon | SST-2 | TREC |
| --- | --- | --- | --- | --- | --- |
| random | 59.6 | 55.2<sub>10.5</sub> | 76.3<sub>12.3</sub> | 66.2<sub>12.9</sub> | 40.8<sub>4.7</sub> |
| max-entropy | 59.3 | 58.8<sub>11.3</sub> | 74.8<sub>5.1</sub> | 65.7<sub>10.7</sub> | 37.8<sub>6.7</sub> |
| reordering | 63.5 | 63.3<sub>6.8</sub> | 89.8<sub>3.8</sub> | 67.9<sub>11.1</sub> | 33.0<sub>4.2</sub> |
| best-of-10 | 72.5 | 72.1<sub>1.9</sub> | 91.1<sub>0.6</sub> | 81.1<sub>4.4</sub> | 45.6<sub>3.5</sub> |
| greedy-oracle | 78.0 | 80.6<sub>1.7</sub> | 91.8<sub>1.1</sub> | 81.7<sub>3.9</sub> | 58.0<sub>7.5</sub> |
| our method (seen examples) | 71.4 | 70.8<sub>7.8</sub> | 90.4<sub>1.9</sub> | 81.0<sub>3.5</sub> | 43.3<sub>2.0</sub> |
| our method (100 new examples) | 71.6 | 71.3<sub>7.4</sub> | 89.2<sub>3.9</sub> | 81.8<sub>2.6</sub> | 44.0<sub>4.6</sub> |
| our method (1000 new examples) | 69.0 | 65.5<sub>7.4</sub> | 88.5<sub>4.2</sub> | 76.7<sub>7.5</sub> | 45.4<sub>5.0</sub> |
Table 3: SAME TASK accuracy on AGNews, Amazon, SST-2 and TREC, across 5 random seeds. $95\%$ confidence intervals are reported as subscripts.

to simultaneously learn from all but one task. Then, we evaluate on the held-out task (e.g., train on AGNews, SST-2, TREC and test on Amazon). The learned policies use 600 examples during training ($3 \times 100$ each for the training set and the reward set). During evaluation, the policy picks examples from an unlabeled set of examples in the held-out task, and we experiment with either 100 or 1000 unlabeled examples.

SEEN EXAMPLES, SAME TASK and NEW EXAMPLES, SAME TASK serve as sanity checks of our learned policies, while NEW EXAMPLES, NEW TASK is the most appropriate setting for evaluating in-context learning.

Baselines and oracles. We consider three baseline methods for example selection. The random strategy simply picks demonstration examples randomly. Our second baseline (max-entropy) is a standard approach in active learning (Settles, 2009; Dagan and Engelson, 1995) which greedily picks the example maximizing classification entropy. We additionally consider a strong example reordering heuristic by Lu et al. (2022), dubbed reordering; reordering first uses the language model to generate a set of fake examples that resemble demonstrations, and then chooses an ordering that maximizes classification entropy on these fake examples. Intuitively, max-entropy and reordering both encourage class balance during prediction. All three baselines can be used in active example selection, namely, example selection that does not have label access to examples before they are selected.

We further consider two oracle methods that require a labeled candidate set and a reward set. The best-of-10 strategy randomly samples 10 times and
In addition, we use a greedy strategy to iteratively choose the example that results in the highest performance on the reward set, and we refer to this strategy as greedy-oracle. The oracles do not work for active example selection and cannot be used in NEW TASK, as the assumption there is that we do not have any labeled examples; we therefore do not compare our learned policies with oracles in NEW TASK. + +We use the baselines and our methods to select 4 demonstration examples for every task, and we average model performance across 5 random runs. + +# 4.2 Main results + +We analyze the effectiveness of applying our method in both SAME TASK and NEW TASK. + +SAME TASK. Our method, evaluated by picking from seen examples, demonstrates strong performance. Across all 4 tasks, our method outperforms the random, max-entropy and reordering baselines by an average of $11.8\%$ , $12.1\%$ and $7.9\%$ , respectively, with $>10\%$ improvements on 2 tasks. + +Beyond performance gains, it is clear that our method helps reduce variance. We present $95\%$ confidence intervals as a proxy for variance. Across all 4 tasks, we observe a consistent decrease in variance compared to the baselines. + +Picking from both 100 and 1000 new examples largely retains the performance gains and variance reductions. Interestingly, we notice a higher overall performance when picking from 100 rather than 1000 new examples. This can be attributed to the large variance (see Appendix C.1 for more results). + +Comparing with oracle methods, our methods perform relatively close to best-of-10, while greedy-oracle significantly outperforms the other methods. Since we want the policies to learn generalizable example selection strategies, we intentionally use simple features, which may explain why our method, even when picking from seen examples, does not outperform the oracles.

| Method | Average | AGNews | Amazon | SST-2 | TREC |
| --- | --- | --- | --- | --- | --- |
| random | 59.6 | 55.2<sub>10.5</sub> | 76.3<sub>12.3</sub> | 66.2<sub>12.9</sub> | 40.8<sub>4.7</sub> |
| max-entropy | 59.3 | 58.8<sub>11.3</sub> | 74.8<sub>5.1</sub> | 65.7<sub>10.7</sub> | 37.8<sub>6.7</sub> |
| reordering | 63.5 | 63.3<sub>6.8</sub> | 89.8<sub>3.8</sub> | 67.9<sub>11.1</sub> | 33.0<sub>4.2</sub> |
| our method (100 examples) | 63.8 | 63.4<sub>10.4</sub> | 86.8<sub>6.7</sub> | 65.9<sub>13.4</sub> | 38.9<sub>5.1</sub> |
| our method (1000 examples) | 65.4 | 66.7<sub>5.7</sub> | 89.9<sub>1.6</sub> | 61.9<sub>7.7</sub> | 43.3<sub>4.4</sub> |

+ +Table 4: NEW TASK accuracy on AGNews, Amazon, SST-2 and TREC, across 5 random seeds. $95\%$ confidence intervals are reported as subscripts. + +Thanks to the high variance of random sampling, best-of-10 is a very performant strategy despite its simplicity, and a reasonable choice if validation is possible. At the cost of an exponential runtime, greedy-oracle shows the strong in-context learning performance attainable with just example selection, motivating the framing of in-context learning optimization as a pure example selection problem. In fact, the average performance of greedy-oracle with GPT-2 (345M) is better than that of GPT-3 Curie, a 20x larger model (see Appendix C.2).7 + +NEW TASK. We further evaluate our methods under the new-task setting, where we train the example selection policy on 3 tasks and evaluate on a previously unseen task. On average, we observe smaller, but still significant, improvements over both the random and max-entropy baselines, suggesting the existence of learnable insights about good demonstration examples that generalize across tasks. On the other hand, we observe limited gains over reordering, signifying the challenge of finding good examples in an unknown task. + +Interestingly, when picking from 1000 examples, we observe a much greater variance reduction compared to the baselines. In comparison, the variance reduction is minimal when picking from 100 examples, and the performance gain is slightly smaller, likely due to randomness. + +We continue this discussion on the effect of the selection set's size on transfer performance in Appendix C.1. + +GPT-3 transfer. Training example selection policies directly on GPT-3 models is not viable, since it requires sampling a significant number of trajectories while computing rewards.
Therefore, we instead evaluate whether policies and examples learned on GPT-2 generalize to GPT-3. Overall, we find mixed transfer results. On the smaller GPT-3 ADA model, we observe small gains ( $\sim 1\%$ ) from transferring both policies and examples, which is impressive considering the architectural differences between GPT-2 and GPT-3. However, we observe mixed results in transfer to BABBAGE and CURIE. We report further details in Appendix C.2. + +# 4.3 What Makes Good Examples? + +To understand what makes good examples, we explore properties of the learned policy and design additional experiments based on our qualitative examination of the selected examples. In the interest of space, we focus on label balance and coverage, and present other results based on linear policies (C.3) and length (C.4) in the Appendix. + +On Amazon and SST-2, both binary sentiment classification tasks, we focus on label balance, measured by the number of positive labels in the demonstration set. For AGNews (4 labels) and TREC (6 labels), we instead focus on the number of distinct labels covered in the demonstration. We present the results in Figure 3 and Figure 4. + +Perhaps surprisingly, a well-balanced demonstration set does not consistently lead to greater performance or less variance. In Amazon, we notice that having all 4 examples be positive actually leads to good in-context learning performance, with an average accuracy of $87.8\%$ , which is $4.5\%$ higher than that of a perfectly balanced demonstration set $(83.3\%)$ . A similar trend is demonstrated in SST-2, where having all positive or all negative labels leads to much smaller variance compared to more balanced sets, while outperforming perfectly balanced sets on average. + +In TREC, we again observe that the model does not need to observe the entire label space to perform well.
The greatest performance occurs when exactly two labels are covered by the demonstration, and the performance deteriorates as label coverage increases. AGNews demonstrates a somewhat expected pattern: when all 4 labels are covered, we observe the best performance along with a small variance. That said, covering three labels does not improve over covering two labels.

![](images/3fbe1f04b3d2979657b2e1061b2b1dcf48e478cda07c5776c2063b71079dbc15.jpg)
(a) Amazon

![](images/66aa0fd4148f4643258b9dff5aface1ac39b66b61cac942a73a1b1306c094dd7.jpg)
(b) SST-2

Figure 3: Accuracies of Amazon and SST-2 with varying label balance (number of positive examples in demonstration), across 100 total random samples of 4 demonstration examples.

![](images/29937bcc2e0581fd703ba0533fbd6f3190a83ade11fb9fa1b14ce9b8810bd6b3.jpg)
(a) AGNews

![](images/4fb116e6f24bcd53479066be968ea37def8353659dba188d10787e6652432272.jpg)
(b) TREC

Figure 4: Accuracies of AGNews and TREC with varying label coverage (number of unique labels covered in demonstration), across 100 total random samples of 4 demonstration examples. A demonstration set that covers only 1 label is very unlikely and does not appear in our experiments.

+ +Overall, our analysis highlights the idiosyncrasies of how GPT-2 acquires information in in-context learning. The sequences that lead to strong performance may not align with human intuitions. + +# 5 Related Work + +Our paper builds on top of prior work that uses RL to solve the active learning problem (Fang et al., 2017; Liu et al., 2018), and is made possible by the recent advances in pre-trained language models (Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020; Gao et al., 2021). In-context learning refers to the observation that LMs (Radford et al., 2019; Brown et al., 2020; Rae et al., 2022; Zhang et al., 2022) can "learn" to perform a task when conditioned on a prompt. Xie et al.
(2022) explain the emergence of in-context learning by inferring the shared latent concept among demonstration examples, while Min et al. (2022) find that the success of in-context learning is largely independent of access to gold labels. + +A variety of issues with in-context learning have been discovered, including surface form competition, the phenomenon in which multiple words referring to the same concept compete for probability mass (Holtzman et al., 2021), and the sensitivity of LMs to changes in the prompt (Lester et al., 2021), instruction (Mishra et al., 2022), or ordering of demonstration examples (Zhao et al., 2021; Lu et al., 2022). To optimize the performance of in-context learning, methods with varying levels of granularity have been proposed, including prompt tuning (Lester et al., 2021; Vu et al., 2022; Wu et al., 2022) and instruction optimization (Mishra et al., 2022; Kojima et al., 2022). Liu et al. (2021) approach the example selection problem by searching for nearest neighbors of test examples in the embedding space, while Rubin et al. (2022) use a scoring LM for example retrieval. + +# 6 Discussion + +Inspired by Pang and Lee (2005), we adopt a Q&A format to discuss the implications of our work. + +Q: Are GPT-2 results still relevant? + +A: We believe they are relevant for three reasons. First, GPT-2 is a public and economically feasible option for many researchers. Our knowledge about GPT-2 is far from complete, and expanding this understanding is useful on its own. Second, in the long term, it is unclear that everyone will have access to large models or that it is appropriate to use the largest model available in every use case. Models of moderate size are likely still useful depending on the use case. Third, it is important to highlight the emergent abilities across different sizes of language models. By understanding the phase change, i.e., when emergent abilities appear, we will better understand the behavior of large-scale language models.
+ +That said, one should caution against making overly general claims based on results from GPT-2, because the results may not generalize to GPT-3 (Bowman, 2022). This is why we present negative results from GPT-3. Differing results between GPT-2 and GPT-3, or more generally between models of different sizes, will be a reality in NLP for a while. It is important for the NLP community to collectively build knowledge about such differences and develop the future ecosystem of models. + +Q: Why did you not experiment with GPT-3-Davinci? + +A: The goal of this work is twofold: 1) assessing the ability of large-scale language models to acquire new information and 2) exploring whether reinforcement learning can identify reliable strategies for actively selecting examples. Our results are generally positive on GPT-2. Meanwhile, we observe relatively small variance after calibration with GPT-3-Babbage, so it does not seem economically sensible to experiment with even bigger models. + +Q: Why did you choose $k = 4$ ? Is this generalizable? + +A: Our experiments are limited by the context window of GPT-2 (1024 tokens) and GPT-3 (2048 tokens). Using $k$ beyond 4 would frequently lead to demonstration examples overflowing the token limit and needing to be truncated. Additionally, prior work (Zhao et al., 2021; Brown et al., 2020) shows diminishing improvements in in-context learning performance when increasing the number of demonstration examples beyond 4. Therefore, we believe experimenting with $k = 4$ is a reasonable choice. We are optimistic that our framework and method can generalize to different numbers of shots. + +# 7 Conclusion + +In this work, we investigate how large language models acquire information through the perspective of example selection for in-context learning. In-context learning with GPT-2 and GPT-3 is sensitive to the selection of demonstration examples.
In order to identify generalizable properties of useful demonstration examples, we study active example selection, where unlabeled examples are iteratively selected, annotated, and added to the prompt. We use reinforcement learning to train policies for active example selection. The learned policy stabilizes in-context learning with GPT-2 and improves accuracy when we apply it to a new pool of unlabeled examples or even to completely new tasks unseen during training. Our analyses further reveal that properties of useful demonstration examples can deviate from human intuitions. + +Examples selected with GPT-2 can still lead to a small improvement on GPT-3 Ada; however, the gain diminishes on larger models (i.e., Babbage and Curie). Our results highlight the challenges of generalization in the era of large-scale models due to their emergent capabilities. We believe that it is important for the NLP community to collectively build knowledge about such differences and develop the future ecosystem of models together. + +# Ethics Statement + +Our primary goal is to understand how large language models acquire new information in in-context learning through the perspective of example selection. A better understanding can help develop more effective strategies for in-context learning as well as better large-scale language models. However, these strategies can also be used in applications that may cause harm to society. + +# Acknowledgments + +We thank all anonymous reviewers for their insightful suggestions and comments. We thank all members of the Chicago Human+AI Lab for feedback on early versions of this work. This work was supported in part by an Amazon research award, a Salesforce research award, a UChicago DSI discovery grant, and an NSF grant IIS-2126602. + +# References + +Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, and Anil Anthony Bharath. 2017. A Brief Survey of Deep Reinforcement Learning. IEEE Signal Processing Magazine, 34(6):26-38. +Richard Bellman.
1957. Dynamic Programming, first edition. Princeton University Press, Princeton, NJ, USA. +Samuel Bowman. 2022. The dangers of underclaiming: Reasons for caution when reporting how NLP systems fail. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7484-7499, Dublin, Ireland. Association for Computational Linguistics. +Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc. +Ido Dagan and Sean P. Engelson. 1995. Committee-based sampling for training probabilistic classifiers. In Proceedings of the Twelfth International Conference on International Conference on Machine Learning, ICML'95, pages 150-157, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to Active Learn: A Deep Reinforcement Learning Approach. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 595-605, Copenhagen, Denmark. 
Association for Computational Linguistics. +Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making Pre-trained Language Models Better Few-shot Learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816-3830, Online. Association for Computational Linguistics. + +Hado Hasselt. 2010. Double Q-learning. In Advances in Neural Information Processing Systems, volume 23. Curran Associates, Inc. +Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. 2017. Rainbow: Combining Improvements in Deep Reinforcement Learning. +Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, and Luke Zettlemoyer. 2021. Surface Form Competition: Why the Highest Probability Answer Isn't Always Right. +Robert Kirk, Amy Zhang, Edward Grefenstette, and Tim Rocktäschel. 2022. A Survey of Generalisation in Deep Reinforcement Learning. +Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large Language Models are Zero-Shot Reasoners. +Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. 2020. Conservative Q-Learning for Offline Reinforcement Learning. +Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. arXiv:2104.08691 [cs]. +Long-Ji Lin. 1992. Self-Improving Reactive Agents Based on Reinforcement Learning, Planning and Teaching. Machine Learning, 8(3-4):293-321. +Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. What Makes Good In-Context Examples for GPT-3? +Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning How to Actively Learn: A Deep Imitation Learning Approach.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1874-1883, Melbourne, Australia. Association for Computational Linguistics. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. +Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086-8098, Dublin, Ireland. Association for Computational Linguistics. +Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? arXiv:2202.12837 [cs]. + +Swaroop Mishra, Daniel Khashabi, Chitta Baral, Yejin Choi, and Hannaneh Hajishirzi. 2022. Reframing Instructional Prompts to GPTk's Language. +Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing Atari with Deep Reinforcement Learning. +Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. 2022. WebGPT: Browser-assisted question-answering with human feedback. +Andrew Y. Ng, Daishi Harada, and Stuart J. Russell. 1999. Policy Invariance Under Reward Transformations: Theory and Application to Reward Shaping. In Proceedings of the Sixteenth International Conference on Machine Learning, ICML '99, pages 278-287, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. +Bo Pang and Lillian Lee. 2005. 
Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of ACL, pages 115-124. +Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. 2017. Curiosity-Driven Exploration by Self-Supervised Prediction. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 488-489, Honolulu, HI, USA. IEEE. +Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True Few-Shot Learning with Language Models. In Advances in Neural Information Processing Systems. +Rafael Figueiredo Prudencio, Marcos R. O. A. Maximo, and Esther Luna Colombini. 2022. A Survey on Offline Reinforcement Learning: Taxonomy, Review, and Open Problems. +Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. +Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena + +Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed 
Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. 2022. Scaling Language Models: Methods, Analysis & Insights from Training Gopher. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. +Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2022. Learning To Retrieve Prompts for In-Context Learning. +Burr Settles. 2009. Active learning literature survey. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics. +Friedrich Freiherr Von Wieser. 1893. Natural Value. Macmillan and Company. +Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '00, pages 200-207, New York, NY, USA. Association for Computing Machinery. +Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer. 2022. SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer. arXiv:2110.07904 [cs]. +Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682. +Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, V. G. Vinod Vydiswaran, and Hao Ma. 2022. IDPG: An Instance-Dependent Prompt Generation Method. arXiv:2204.04497 [cs]. 
+Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2022. An Explanation of In-context Learning as Implicit Bayesian Inference. + +Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. OPT: Open Pre-trained Transformer Language Models. + +Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level Convolutional Networks for Text Classification. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc. + +Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate Before Use: Improving Few-Shot Performance of Language Models. + +# A Conservative Q-Learning + +The objective of standard Q-learning is to minimize the Bellman Error (BE):

$$
\mathrm{BE}(Q) = \mathbb{E}_{s, a, s' \sim \mathcal{D}} \left[ r(s, a) + \gamma \max_{a'} Q(s', a') - Q(s, a) \right].
$$

An issue with offline Q-learning is that there are out-of-distribution (OOD) actions that do not appear in the training data. Learned Q-networks often overestimate the Q-values of these actions, causing the policy to take unfamiliar actions during evaluation, which hurts performance. To mitigate this issue, conservative Q-learning (CQL) adds a penalty term to regularize Q-values:

$$
\min_{Q} \; \alpha \, \mathbb{E}_{s \sim \mathcal{D}} \left[ \log \sum_{a} \exp\left(Q(s, a)\right) - \mathbb{E}_{a \sim \hat{\pi}_{\beta}} \left[ Q(s, a) \right] \right] + \frac{1}{2} \mathrm{BE}(Q)^{2},
$$

where $\alpha$ is a weight term, and $\hat{\pi}_{\beta}$ is the behavior policy, under which the offline transitions are collected for training.
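As a toy illustration of the objective above, a tabular sketch is given below. This is not the paper's implementation (which uses an MLP Q-network); the batch arrays and hyperparameter values are made-up stand-ins, and for simplicity the Bellman error is squared per transition rather than in expectation.

```python
import numpy as np

def cql_objective(Q, states, actions, rewards, next_states,
                  alpha=0.1, gamma=0.99):
    """Toy CQL loss for a tabular Q-function Q of shape |S| x |A|."""
    q_sa = Q[states, actions]                      # Q(s, a) of logged actions
    # Bellman error per transition: r + gamma * max_a' Q(s', a') - Q(s, a)
    be = rewards + gamma * Q[next_states].max(axis=1) - q_sa
    # CQL penalty: log-sum-exp over all actions minus Q of the logged actions
    logsumexp = np.log(np.exp(Q[states]).sum(axis=1))
    penalty = (logsumexp - q_sa).mean()
    return alpha * penalty + 0.5 * (be ** 2).mean()

# Illustrative batch: 2 states, 2 actions, one logged transition.
Q = np.array([[1.0, 2.0],
              [0.5, 0.0]])
loss = cql_objective(Q, states=np.array([0]), actions=np.array([1]),
                     rewards=np.array([1.0]), next_states=np.array([1]))
```

Setting `alpha=0` recovers the plain (per-sample) squared Bellman error; increasing `alpha` pushes down Q-values of actions the behavior policy never took.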
Notice that this objective penalizes all actions unobserved under $\hat{\pi}_{\beta}$ . Intuitively, this regularizer leads to a policy that avoids unfamiliar actions during evaluation. We refer the interested reader to the original paper for theoretical guarantees and further details (Kumar et al., 2020). + +# B Hyperparameters + +We report the list of hyperparameters for the hyperparameter search in Table 5. We use grid search over these hyperparameters to determine the combination that maximizes validation performance.
| Hyperparameter | Value |
| --- | --- |
| Train steps | 8000 |
| Batch size | 16 |
| Hidden dim (MLP) | 16 |
| Replay memory size | 50000 |
| Learning rate | 1e-4, 3e-4, 5e-4 |
| CQL regularization weight α | 0, 0.1, 0.2 |
| Target network update steps | 100, 200, 400 |
| Dropout rate | 0, 0.25 |
+ +Table 5: List of hyperparameters used in our experiments. + +![](images/7c38437fe737affea24d8e9634cc4a4d0af982d7a25e1473cea2b6ae4318b33f.jpg) +Figure 5: Average NEW TASK (transfer) accuracy on 4 tasks across 5 random seeds. $95\%$ confidence intervals are reported as error bars. + +During validation, the policy picks from the reward set and is evaluated on the training set, whereas in training, we pick from the training set and evaluate on the reward set. We point out that our validation scheme does not use extra data. + +Table 6 further includes the performance of linear policies. The linear policies perform better than the baselines, but clearly worse than the MLP policy. + +# C Additional Results + +We present results on the effect of unlabeled size and on transfer to GPT-3. We also provide additional analysis towards understanding what makes good examples for in-context learning. + +# C.1 Effect of Unlabeled Size + +In §4.2, we noticed that the number of unlabeled examples available for selection plays a role in the performance of our policies. One might expect the transfer performance in the NEW TASK setting to scale with unlabeled size, simply because there are additional examples to pick from.
| Method | Average | AGNews | Amazon | SST-2 | TREC |
| --- | --- | --- | --- | --- | --- |
| random | 59.6 | 55.2<sub>10.5</sub> | 76.3<sub>12.3</sub> | 66.2<sub>12.9</sub> | 40.8<sub>4.7</sub> |
| max-entropy | 59.3 | 58.8<sub>11.3</sub> | 74.8<sub>5.1</sub> | 65.7<sub>10.7</sub> | 37.8<sub>6.7</sub> |
| best-of-10 | 72.5 | 72.1<sub>1.9</sub> | 91.1<sub>0.6</sub> | 81.1<sub>4.4</sub> | 45.6<sub>3.5</sub> |
| greedy-oracle | 78.0 | 80.6<sub>1.7</sub> | 91.8<sub>1.1</sub> | 81.7<sub>3.9</sub> | 58.0<sub>7.5</sub> |
| Linear policy (seen examples) | 65.6 | 62.8<sub>7.8</sub> | 82.7<sub>8.6</sub> | 74.2<sub>5.8</sub> | 42.8<sub>2.9</sub> |
| Linear policy (1000 new examples) | 65.9 | 69.5<sub>6.0</sub> | 83.7<sub>6.2</sub> | 65.2<sub>4.9</sub> | 45.2<sub>2.8</sub> |
| MLP policy (seen examples) | 71.4 | 70.8<sub>7.8</sub> | 90.4<sub>1.9</sub> | 81.0<sub>3.5</sub> | 43.3<sub>2.0</sub> |
| MLP policy (1000 new examples) | 69.0 | 65.5<sub>7.4</sub> | 88.5<sub>4.2</sub> | 76.7<sub>7.5</sub> | 45.4<sub>5.0</sub> |
+ +Table 6: SAME TASK accuracy on AGNews, Amazon, SST-2 and TREC, across 5 random seeds, with our methods (using MLP and Linear networks as policies). $95\%$ confidence intervals are reported as subscripts. + +In Figure 5, we plot average accuracies in the NEW TASK setting, where we train our policies on three datasets and evaluate on a held-out dataset. Here, we notice that the benefit of a larger unlabeled set is twofold, both in increasing transfer performance and in reducing variance. That said, the improvement is not necessarily monotonic due to the large variance. Interestingly, our learned policy is performant even when the unlabeled set is small. Picking from 50 unlabeled examples, our policies reach an average accuracy of $63.3\%$ , still managing to outperform random demonstration $(59.6\%)$ . + +# C.2 Transfer to GPT-3 + +Despite demonstrating abilities to generalize across tasks, it is not yet clear whether policies learned on GPT-2 can generalize to other models, such as GPT-3. In Table 7, we report the performance of transferring both learned policies and selected examples from GPT-2 to GPT-3 ADA, BABBAGE and CURIE. + +We observe mixed results when transferring to GPT-3. With an uncalibrated ADA model, we observe a small, but measurable, improvement from transferring either policy (1.1%) or examples directly (0.9%). Such a trend holds for the calibrated ADA model too (0.4% and 1.9%). Despite the improved performance, the benefits of variance reduction are diminished. Perhaps surprising is the generalization of learned policies: it suggests that different models could indeed share similar preferences for demonstration examples. + +On the other hand, we observe negative results when transferring to BABBAGE. When transferring the learned policy to an uncalibrated BABBAGE model, we notice the performance drops by $1.6\%$ . For cost considerations, we run CURIE experiments for one
Marginal gains are observed when transferring policy to the uncalibrated model (1.8%) and examples to the calibrated model (1.0%). In other scenarios, transfer results match or underperform base models. As the observed results could be attributed to randomness, we hold short of drawing conclusions. + +# C.3 Coefficients in Linear Policies + +Although linear policies perform worse than the MLP, they are more interpretable. Figure 6 shows the coefficients of feature representations of actions for AGNews and SST-2. The average coefficient of entropy is indeed positive, suggesting that strategies encouraging class balance have some value. However, it is often not the most important feature. For example, positive examples in SST-2 matter more, which is consistent with our observation in the main paper. Moreover, the variance is large, highlighting the challenges in learning a generalizable policy. + +# C.4 Effect of Length + +We also examine the effect of length on in-context learning. Intuitively, one might expect longer examples to be more meaningful. However, we do not see a correlation between length and accuracy in AGNews and TREC, and a non-significant negative correlations in SST-2. In Amazon, we observe a statistically significant (p-value = 0.019), but weak correlation between length and accuracy. Overall, there is no evidence suggesting longer examples improve in-context learning performance. + +
| Model | Average | AGNews | Amazon | SST-2 | TREC |
| --- | --- | --- | --- | --- | --- |
| ADA | 59.0 | 62.9<sub>15.3</sub> | 87.0<sub>5.3</sub> | 65.0<sub>8.9</sub> | 21.2<sub>5.8</sub> |
| ADA (C) | 62.5 | 64.0<sub>3.5</sub> | 90.0<sub>1.1</sub> | 73.8<sub>8.5</sub> | 22.1<sub>4.6</sub> |
| GPT-2 policy → ADA | 60.1 | 51.8<sub>15.5</sub> | 89.1<sub>1.7</sub> | 73.3<sub>15.0</sub> | 26.2<sub>3.9</sub> |
| GPT-2 policy → ADA (C) | 62.9 | 55.6<sub>5.9</sub> | 89.7<sub>2.2</sub> | 86.7<sub>1.6</sub> | 19.5<sub>1.4</sub> |
| GPT-2 examples → ADA | 59.9 | 48.9<sub>12.5</sub> | 89.3<sub>2.5</sub> | 74.8<sub>11.4</sub> | 26.6<sub>3.9</sub> |
| GPT-2 examples → ADA (C) | 64.4 | 62.0<sub>8.3</sub> | 88.7<sub>3.2</sub> | 84.0<sub>3.6</sub> | 23.0<sub>5.3</sub> |
| BABBAGE | 70.3 | 68.0<sub>12.3</sub> | 93.4<sub>0.7</sub> | 92.2<sub>2.4</sub> | 27.4<sub>5.1</sub> |
| BABBAGE (C) | 74.4 | 78.1<sub>5.3</sub> | 92.7<sub>1.4</sub> | 90.8<sub>1.0</sub> | 36.0<sub>3.5</sub> |
| GPT-2 policy → BABBAGE | 68.7 | 58.0<sub>5.9</sub> | 93.6<sub>2.2</sub> | 90.6<sub>1.6</sub> | 32.5<sub>1.4</sub> |
| GPT-2 policy → BABBAGE (C) | 74.4 | 75.1<sub>5.3</sub> | 93.4<sub>0.5</sub> | 90.3<sub>1.7</sub> | 38.8<sub>6.1</sub> |
| GPT-2 examples → BABBAGE | 65.8 | 42.6<sub>10.0</sub> | 93.0<sub>0.4</sub> | 91.1<sub>2.9</sub> | 36.6<sub>8.4</sub> |
| GPT-2 examples → BABBAGE (C) | 73.6 | 73.9<sub>7.3</sub> | 93.1<sub>0.5</sub> | 91.1<sub>1.8</sub> | 36.2<sub>2.6</sub> |
| CURIE | 74.2 | 76.7 | 94.7 | 93.8 | 31.4 |
| CURIE (C) | 76.3 | 69.8 | 94.8 | 93.4 | 47.0 |
| GPT-2 policy → CURIE | 76.0 | 81.2 | 95.7 | 96.0 | 31.0 |
| GPT-2 policy → CURIE (C) | 75.4 | 75.8 | 95.4 | 93.0 | 38.2 |
| GPT-2 examples → CURIE | 74.4 | 77.7 | 93.8 | 94.3 | 31.8 |
| GPT-2 examples → CURIE (C) | 77.3 | 79.8 | 93.1 | 94.6 | 41.8 |
+ +Table 7: Transfer of policies and examples learned on GPT-2 to various GPT-3 models across 5 random sets of 4-shot demonstration examples. C indicates calibration. $95\%$ confidence intervals are reported as subscripts. Due to resource constraints, we limit experiments with CURIE to 1 random set. + +![](images/fb8123996886ca9d8c4140262ff6ac9c4456562d932a2ee7bbcfb132fcfe9cc2.jpg) +(a) AGNews + +![](images/d3dad01247ca45046e3c48987037e3ef75e110eb06f7f2783172b00c17179344.jpg) +(b) SST-2 +Figure 6: Average coefficients of linear policies trained on AGNews and SST-2 across 5 runs. Error bars show the standard deviation. + +![](images/c4594ebd2f593ffdc3a64805a23bc4853ce5c5a266fc04a00effa2c575e27d92.jpg) +(a) AGNews $(r = -0.01)$ + +![](images/27b61c2552d78716cd5f40d163bac8000b9fa47e82317dc98e3a5ca512e6ac44.jpg) +(b) Amazon $(r = -0.23^{*})$ + +![](images/27509bd5eaa0b71c55854046d406cb7cf0b30f1098e2dd4240cd94694bd82d4c.jpg) +(c) SST-2 $(r = -0.08)$ + +![](images/f047c14e8238422b1c3501e82811f9d99b95d153ec2782ddcecc715f86e7385c.jpg) +(d) TREC $(r = -0.00)$ +Figure 7: Correlation between length (number of words) of the demonstration prompt and in-context learning performance across 100 randomly sampled sets of 4-shot demonstrations. * indicates a p-value $< 0.05$ .
\ No newline at end of file diff --git a/activeexampleselectionforincontextlearning/images.zip b/activeexampleselectionforincontextlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..23490e94fe548bef674caa132b7cde766e40482a --- /dev/null +++ b/activeexampleselectionforincontextlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:154b223ff664b3aa73227f29d1d3752541eaa9a396cb1dae444ab24574642a62 +size 585056 diff --git a/activeexampleselectionforincontextlearning/layout.json b/activeexampleselectionforincontextlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e9a317b84a154d72be23b207c910149ea4b74554 --- /dev/null +++ b/activeexampleselectionforincontextlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c0e34e02e0cfab8c7f53322d80d14ca6577f47763885fc9a9fff3efc8b27ed0 +size 456221 diff --git a/adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_content_list.json b/adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..12bb00f7f572a7302c85dcce510ea6a2a38db8c1 --- /dev/null +++ b/adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88eba0b943890474e18fdd9aa11f6a118919ed6c1ac8bfff9995d8838df7f4bc +size 109872 diff --git a/adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_model.json b/adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..af2026ba519c4aa821670ab8bb260a228622e09d --- /dev/null +++ 
b/adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88622679c96c810a3f4edfc026c0540da550341c04e7009d87b20de61ce26fbe +size 130955 diff --git a/adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_origin.pdf b/adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..cc38c1c56b2b859f141aa40323fb2a4b8949d102 --- /dev/null +++ b/adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71538fee8d64d4540bfae065ab58f909e816fb5f68b63994b7ef5d2fa0048aac +size 912766 diff --git a/adamixmixtureofadaptationsforparameterefficientmodeltuning/full.md b/adamixmixtureofadaptationsforparameterefficientmodeltuning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a6856a5a00cc7926ea1431b9d1d46a0d99897d7a --- /dev/null +++ b/adamixmixtureofadaptationsforparameterefficientmodeltuning/full.md @@ -0,0 +1,434 @@ +# AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning + +Yaqing Wang* + +Purdue University + +wang5075@purdue.edu + +Sahaj Agarwal + +Microsoft + +sahagar@microsoft.com + +Subhabrata Mukherjee† + +Microsoft Research + +submukhe@microsoft.com + +Xiaodong Liu + +Microsoft Research + +Jing Gao + +Purdue University + +Ahmed Hassan Awadallah + +Microsoft Research + +Jianfeng Gao + +Microsoft Research + +# Abstract + +Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters, and storing a large copy of the PLM weights for every task resulting in increased cost for storing, sharing and serving the models. 
To address this, parameter-efficient fine-tuning (PEFT) techniques were introduced where small trainable components are injected in the PLM and updated during fine-tuning. We propose AdaMix as a general PEFT method that tunes a mixture of adaptation modules – given the underlying PEFT method of choice – introduced in each Transformer layer while keeping most of the PLM weights frozen. For instance, AdaMix can leverage a mixture of adapters like Houlsby (Houlsby et al., 2019) or a mixture of low rank decomposition matrices like LoRA (Hu et al., 2021) to improve downstream task performance over the corresponding PEFT methods for fully supervised and few-shot NLU and NLG tasks. Further, we design AdaMix such that it matches the same computational cost and the number of tunable parameters as the underlying PEFT method. By only tuning $0.1 - 0.2\%$ of PLM parameters, we show that AdaMix outperforms SOTA parameter-efficient fine-tuning and full model fine-tuning for both NLU and NLG tasks. Code and models are made available at https://aka.ms/AdaMix. + +# 1 Introduction + +Standard fine-tuning of large pre-trained language models (PLMs) (Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020; Raffel et al., 2019) to downstream tasks requires updating all model parameters. Given the ever-increasing size of PLMs (e.g., 175 billion parameters for GPT-3 (Brown et al., 2020) and 530 billion parameters for MT-NLG (Smith et al., 2022)), even the fine-tuning step becomes expensive as it requires storing a full copy + +![](images/7b40909bb1960e3e6eb7cf234ddc5b344dda10f1f9e3df9bf8cf3be9471d8ebd.jpg) +Figure 1: Performance of different parameter-efficient fine-tuning methods on GLUE development set with RoBERTa-large encoder following a setup similar to (Houlsby et al., 2019) for fair comparison. 
We report the performance of Pfeiffer (Pfeiffer et al., 2021), Houlsby (Houlsby et al., 2019) and LoRA (Hu et al., 2021) with their default number of fine-tuned parameters as well as the number of fine-tuned parameters used in AdaMix with a mixture of adaptations. Red dash shows the performance of full model fine-tuning. + +of model weights for every task. To address these challenges, recent works have developed parameter-efficient fine-tuning (PEFT) techniques. These approaches typically underperform standard full model fine-tuning, but significantly reduce the number of trainable parameters. There are many varieties of PEFT methods, including prefix-tuning (Li and Liang, 2021) and prompt-tuning (Lester et al., 2021) to condition frozen language models via natural language task descriptions, low dimensional projections using adapters (Houlsby et al., 2019; Pfeiffer et al., 2020, 2021) and more recently using low-rank approximation (Hu et al., 2021). Figure 1 shows the performance of some popular PEFT methods with varying number of tunable parameters. We observe a significant performance gap with respect to full model tuning where all PLM parameters are updated. + +In this paper, we present AdaMix, a mixture of adaptation modules approach, and show that it outperforms SOTA PEFT methods and also full model fine-tuning while tuning only $0.1 - 0.2\%$ of PLM parameters. + +In contrast to traditional PEFT methods that use a single adaptation module in every Transformer layer, AdaMix uses several adaptation modules that learn multiple views of the given task. In order to design this mixture of adaptations, we take inspiration from sparsely-activated mixture-of-experts (MoE) models. In traditional dense models (e.g., BERT (Devlin et al., 2019), GPT-3 (Brown et al., 2020)), all model weights are activated for every input example. MoE models induce sparsity by activating only a subset of the model weights for each incoming input. 
+ +Consider adapters (Houlsby et al., 2019), one of the most popular PEFT techniques, to illustrate our method. A feedforward layer (FFN) is introduced to down-project the hidden representation to a low dimension $d$ (also called the bottleneck dimension), followed by another up-project FFN to match the dimensionality of the next layer. Instead of using a single adapter, we introduce multiple project-up and project-down FFNs in each Transformer layer. We route input examples to one of the project-up and one of the project-down FFNs, resulting in the same computational cost (FLOPs) as using a single adapter. For methods like LoRA (Hu et al., 2021), which decomposes the update to the pre-trained weights into low-rank matrices ($A$ and $B$), we introduce multiple low-rank decompositions and route the input examples to them similarly to adapters. + +We discuss different routing mechanisms and show that stochastic routing yields good performance while eliminating the need to introduce any additional parameters for module selection. To alleviate the training instability that may arise from the randomness of selecting different adaptation modules in different training steps, we leverage consistency regularization and the sharing of adaptation modules during stochastic routing. + +The introduction of multiple adaptation modules results in an increased number of adaptation parameters. This does not increase computational cost but does increase storage cost. To address this, we develop a merging mechanism that combines the weights from the different adaptation modules into a single module in each Transformer layer. This allows us to keep the number of adaptation parameters the same as that of a single adaptation module. Our merging mechanism is inspired by model weight averaging, as in model soups (Wortsman et al., 2022) and MultiBERTs (Sellam et al., 2022).
Weight averaging of models with different random initializations has been shown to improve model performance in recent works (Matena and Raffel, 2021; Neyshabur et al., 2020; Frankle et al., 2020), which show that the optimized models lie in the same basin of the error landscape. While the above works are geared towards fine-tuning independent models, we extend this idea to parameter-efficient fine-tuning with randomly initialized adaptation modules and a frozen language model. + +Overall, our work makes the following contributions: + +(a) We develop a new method, AdaMix, as a mixture of adaptations for parameter-efficient fine-tuning (PEFT) of large language models. Given any PEFT method of choice, such as adapters or low-rank decompositions, AdaMix improves downstream task performance over the underlying PEFT method. +(b) AdaMix is trained with stochastic routing and adaptation module merging to retain the same computational cost (e.g., FLOPs, #tunable adaptation parameters) and benefits of the underlying PEFT method. To better understand how AdaMix works, we demonstrate its strong connections to Bayesian Neural Networks and model ensembling. +(c) By tuning only $0.1 - 0.2\%$ of a pre-trained language model's parameters, AdaMix is the first PEFT method to outperform full model fine-tuning for all NLU tasks on GLUE, and it outperforms other competing methods for NLG and few-shot NLU tasks. + +Practical benefits of PEFT methods. The most significant benefit of PEFT methods comes from the reduction in memory and storage usage. For a Transformer, the VRAM consumption can be significantly reduced as we do not need to keep track of optimizer states for the frozen parameters. PEFT methods also allow multiple tasks to share the same copy of the full (frozen) PLM. Hence, the storage cost of introducing a new task can be reduced by up to 444x (from 355MB to 0.8MB with a RoBERTa-large encoder in our setting).
+ +We present background on Mixture-of-Experts (MoE) and adapters in Section A of Appendix. + +# 2 Mixture-of-Adaptations + +Consider a set of $M$ adaptation modules injected in each Transformer layer, where $A_{ij} : i \in \{1 \cdots L\}, j \in \{1 \cdots M\}$ represents the $j^{th}$ adaptation module in the $i^{th}$ Transformer layer. For illustration, we will consider adapters (Houlsby + +![](images/d49dee8a5db38fc4a7071b17fe1020abb86c74f392a7030bc36989fc73b4817b.jpg) +Figure 2: Mixture-of-Adaptations (AdaMix) with adapters (Houlsby et al., 2019) as the underlying PEFT mechanism. For illustration, we show $M = 4$ adaptation modules consisting of feedforward up (FFN_U) feedforward down (FFN_D) projection matrices. The above block shown for one Transformer layer is repeated across all the layers. AdaMix stochastically routes instances from an input batch via randomly selected adaptation modules resulting in FLOPs match to a single module with consistency regularization and parameter sharing. Adaptation merging (Figure 3) collapses multiple modules to match single-module parameters in each layer. + +et al., 2019) as the underlying parameter-efficient fine-tuning (PEFT) mechanism as a running example. Similar principles can be used for other PEFT mechanism like LoRA (Hu et al., 2021) for low-rank decompositions as we show in experiments. + +We adopt the popularly used Transformer architecture (Vaswani et al., 2017) consisting of $L$ repeated Transformer blocks, where each block consists of a self-attention sub-layer, a fully connected feed-forward network (FFN) and residual connections around the sub-layers followed by layer normalization. Each adaptation module $A_{ij}$ corresponding to the adapters (Houlsby et al., 2019) consists of a feedforward up $\mathcal{W}_{ij}^{up}$ and a feedforward down $\mathcal{W}_{ij}^{down}$ projection matrices. 
+ +# 2.1 Routing Policy + +Recent work like THOR (Zuo et al., 2021) has demonstrated that a stochastic routing policy such as random routing works as well as classical routing mechanisms such as Switch routing (Fedus et al., 2021), with the following benefits. Since input examples are randomly routed to different experts, there is no requirement for additional load balancing, as each expert has an equal opportunity of being activated, simplifying the framework. Further, there are no added parameters, and therefore no additional computation, at the Switch layer for expert selection. The latter is particularly important in our setting of parameter-efficient fine-tuning for keeping the parameters and FLOPs the same as those of a single adaptation module. To analyze the workings of AdaMix, we demonstrate connections of stochastic routing and model weight averaging to Bayesian Neural Networks and model ensembling in Section 2.5. + +In the stochastic routing policy for AdaMix with adapters, at any training step we randomly select a pair of feedforward-up and feedforward-down projection matrices in the $i^{th}$ Transformer layer as $A_{i} = \{\mathcal{W}_{ij}^{up},\mathcal{W}_{ik}^{down}\}$ and $B_{i} = \{\mathcal{W}_{ij^{\prime}}^{up},\mathcal{W}_{ik^{\prime}}^{down}\}$ respectively. Given this selection of adaptation modules $A_{i}$ and $B_{i}$ in each Transformer layer at every step, all the inputs in a given batch are processed through the same set of modules. Given an input representation $x$ in a given Transformer layer, each such pair of modules performs the following transformation: + +$$
x \leftarrow x + f (x \cdot \mathcal{W}^{\text{down}}) \cdot \mathcal{W}^{\text{up}} \tag{1}
$$ + +Such stochastic routing enables the adaptation modules to learn different transformations during training and obtain multiple views of the task. However, this also creates the challenge of which modules to use during inference, given the random routing protocol used during training.
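As a concrete illustration, the stochastic routing of Eq. (1) can be sketched in PyTorch as below. This is a minimal sketch with illustrative names and sizes, not the authors' released implementation:

```python
import torch
import torch.nn as nn

class MixtureOfAdapters(nn.Module):
    """Sketch of an AdaMix-style layer: M project-down and M project-up
    FFNs; one of each is sampled per forward pass, so per-step FLOPs
    match a single adapter. Names and defaults are illustrative."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int, num_modules: int = 4):
        super().__init__()
        self.down = nn.ModuleList(
            nn.Linear(hidden_dim, bottleneck_dim) for _ in range(num_modules))
        self.up = nn.ModuleList(
            nn.Linear(bottleneck_dim, hidden_dim) for _ in range(num_modules))
        self.act = nn.GELU()  # the nonlinearity f in Eq. (1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Stochastic routing: pick one project-down and one project-up
        # module for the whole batch at this training step.
        j = torch.randint(len(self.down), (1,)).item()
        k = torch.randint(len(self.up), (1,)).item()
        # Residual adapter transformation: x <- x + f(x W_down) W_up
        return x + self.up[k](self.act(self.down[j](x)))
```

A second forward pass through the same layer will generally select a different module pair, which is what yields the two stochastic views used by the consistency regularizer of Section 2.2.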
We address this challenge with the following two techniques that further allow us to collapse the adaptation modules and obtain the same computational cost (FLOPs, #tunable adaptation parameters) as that of a single module. + +![](images/72421d91d23ec511278f293e0b676f684de7860573f1de029ee68cb5f69ea5a3.jpg) +Figure 3: Stochastic routing during training activates different adaptation modules to obtain multiple views of the task, with FLOPs matching a single module. Merging the weights of the adaptation modules $\left(\{\mathrm{FFN\_U}_i\}, \{\mathrm{FFN\_D}_i\}: i \in \{1 \cdots 4\}\right)$ by averaging preserves the improved performance while matching the parameters of a single module. + +# 2.2 Consistency regularization + +Consider $\mathcal{A} = \{A_i\}_{i=1}^{L}$ and $\mathcal{B} = \{B_i\}_{i=1}^{L}$ to be the sets of adaptation modules (e.g., projection matrices) activated during two stochastic forward passes through the network for an input $x$ across the $L$ layers of the Transformer. The objective of consistency regularization is to enable the adaptation modules to share information and prevent divergence. To this end, we add the following consistency loss as a regularizer to the task-specific optimization loss: + +$$
\mathcal{L} = -\sum_{c=1}^{C} \mathcal{I}(x, c) \log \operatorname{softmax}\left(z_{c}^{\mathcal{A}}(x)\right) + \frac{1}{2}\left(\mathcal{KL}\left(z_{(\cdot)}^{\mathcal{A}}(x) \,\middle\|\, z_{(\cdot)}^{\mathcal{B}}(x)\right) + \mathcal{KL}\left(z_{(\cdot)}^{\mathcal{B}}(x) \,\middle\|\, z_{(\cdot)}^{\mathcal{A}}(x)\right)\right) \tag{2}
$$ + +where $\mathcal{I}(x,c)$ is a binary indicator (0 or 1) of whether class label $c$ is the correct classification for $x$, and $z_{(\cdot)}^{\mathcal{A}}(x)$ and $z_{(\cdot)}^{\mathcal{B}}(x)$ are the predicted logits while routing through the two sets of adaptation modules $\mathcal{A}$ and $\mathcal{B}$ respectively, with $\mathcal{KL}$ denoting the Kullback-Leibler divergence. $x$ is the input representation from the PLM with frozen parameters, and only the parameters of the modules $\{\mathcal{W}^{up},\mathcal{W}^{down}\}$ are updated during training. + +# 2.3 Adaptation module merging + +While the above regularization mitigates the inconsistency of random module selection during inference, it still results in an increased serving cost to host several adaptation modules. Prior work on fine-tuning language models for downstream tasks has shown that averaging the weights of models fine-tuned with different random seeds outperforms a single fine-tuned model. Recent work (Wortsman et al., 2022) has also shown that differently fine-tuned models from the same initialization lie in the same error basin, motivating the use of weight aggregation for robust task summarization. We adopt and extend these language model fine-tuning techniques to our parameter-efficient training of multi-view adaptation modules. + +In contrast to the aforementioned techniques, such as stochastic routing and consistency regularization, which are applied during training, we employ adaptation merging only during inference.
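A minimal sketch of the consistency objective of Eq. (2) in Section 2.2 above (illustrative PyTorch, not the released code): cross-entropy through one routed pass plus a symmetric KL term between the logits of the two stochastic passes.

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_a: torch.Tensor,
                     logits_b: torch.Tensor,
                     labels: torch.Tensor) -> torch.Tensor:
    """Eq. (2) sketch: cross-entropy through modules A plus a symmetric
    KL that keeps the two stochastic passes A and B from diverging."""
    ce = F.cross_entropy(logits_a, labels)
    log_pa = F.log_softmax(logits_a, dim=-1)
    log_pb = F.log_softmax(logits_b, dim=-1)
    # KL(A || B): target is pass A, input is pass B (both in log space).
    kl_ab = F.kl_div(log_pb, log_pa, log_target=True, reduction="batchmean")
    kl_ba = F.kl_div(log_pa, log_pb, log_target=True, reduction="batchmean")
    return ce + 0.5 * (kl_ab + kl_ba)
```

When the two passes produce identical logits, the KL terms vanish and the loss reduces to plain cross-entropy; the regularizer grows as the two routed views disagree.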
Given a set of adaptation modules $\mathcal{W}_{ij}^{up}$ and $\mathcal{W}_{ik}^{down}$ for $i\in \{1\dots L\}$ and $\{j,k\} \in \{1\dots M\}$, we simply average the weights of all the corresponding modules (e.g., project-up or project-down matrices) in every Transformer layer to collapse them into a single module $\{\mathcal{W}_i^{\prime up},\mathcal{W}_i^{\prime down}\}$, where: + +$$
\mathcal{W}_{i}^{\prime up} \leftarrow \frac{1}{M} \sum_{j=1}^{M} \mathcal{W}_{ij}^{up} \qquad \mathcal{W}_{i}^{\prime down} \leftarrow \frac{1}{M} \sum_{j=1}^{M} \mathcal{W}_{ij}^{down} \tag{3}
$$ + +# 2.4 Adaptation module sharing + +While stochastic routing to multi-view adaptation modules increases the model capacity, it can also hurt downstream tasks with smaller amounts of labeled data, which must still tune several sets of adaptation modules. To address this challenge, we use another mechanism that shares some of the adaptation modules (e.g., the project-down or the project-up operations) to improve training efficiency. In the standard setting for adapters, we share only the feedforward project-up matrices, i.e., $\mathcal{W}_{ij}^{up} = \mathcal{W}_i^{up}$. We investigate these design choices via ablation studies in our experiments in Section 3.3 and Section C in Appendix. + +# 2.5 Connection to Bayesian Neural Networks and Model Ensembling + +A Bayesian Neural Network (BNN) (Gal and Ghahramani, 2015) replaces a deterministic model's weight parameters with a distribution over the parameters. For inference, a BNN averages over all possible weights, also referred to as marginalization. Consider $f^{\mathcal{W}}(x) \in \mathbb{R}^d$ to be the $d$-dimensional output of such a neural network, where the model likelihood is given by $p(y|f^{\mathcal{W}}(x))$. In our setting, $\mathcal{W} = \langle \mathcal{W}^{up}, \mathcal{W}^{down} \rangle$, along with the frozen PLM parameters, which are dropped from the notation for simplicity.
For classification, we can further apply a softmax likelihood to the output to obtain: $P(y = c|x,\mathcal{W}) = \operatorname{softmax}(f^{\mathcal{W}}(x))$. Given an instance $x$, the probability distribution over the classes is given by marginalization over the posterior distribution as: $p(y = c|x) = \int_{\mathcal{W}}p(y = c|f^{\mathcal{W}}(x))p(\mathcal{W}|X,Y)d\mathcal{W}$. + +This requires averaging over all possible model weights, which is intractable in practice. Therefore, several approximation methods have been developed based on variational inference and stochastic regularization techniques using dropout. In this work, we leverage another stochastic regularization in the form of random routing. Here, the objective is to find a surrogate distribution $q_{\theta}(\mathcal{W})$ in a tractable family of distributions that can replace the true model posterior, which is hard to compute. The ideal surrogate is identified by minimizing the Kullback-Leibler (KL) divergence between the candidate and the true posterior. + +Consider $q_{\theta}(\mathcal{W})$ to be the stochastic routing policy which samples $T$ masked model weights $\{\widetilde{\mathcal{W}}_t\}_{t=1}^T \sim q_{\theta}(\mathcal{W})$. For classification tasks, the approximate posterior can now be obtained by Monte-Carlo integration (Gal et al., 2017) as: + +$$
\begin{aligned} p(y = c | x) &\approx \int p\big(y = c | f^{\mathcal{W}}(x)\big)\, q_{\theta}(\mathcal{W})\, d\mathcal{W} \\ &\approx \frac{1}{T} \sum_{t=1}^{T} p\big(y = c | f^{\widetilde{\mathcal{W}}_t}(x)\big) \\ &= \frac{1}{T} \sum_{t=1}^{T} \operatorname{softmax}\big(f^{\widetilde{\mathcal{W}}_t}(x)\big) \end{aligned} \tag{4}
$$ + +However, computing the approximate posterior above in our setting requires storing all the stochastic model weights $\widetilde{\mathcal{W}}_t$, which increases the serving cost during inference.
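A toy numerical sketch (illustrative, not from the paper's code) contrasting the Monte-Carlo ensemble of Eq. (4) with collapsing the weights first as in Eq. (3); here `Ws` stands in for $T$ stochastically routed adaptation weights:

```python
import torch

torch.manual_seed(0)
T, d, C = 4, 8, 3                            # passes, input dim, classes
x = torch.randn(d)
Ws = [torch.randn(C, d) for _ in range(T)]   # stand-ins for the W~_t

# Eq. (4): average the softmax outputs of T stochastic passes.
p_ens = torch.stack([torch.softmax(W @ x, dim=-1) for W in Ws]).mean(0)

# Eq. (3): merge the weights once, then run a single pass at inference.
W_merged = torch.stack(Ws).mean(0)
p_merged = torch.softmax(W_merged @ x, dim=-1)

# Both yield valid class distributions; merging needs one forward pass
# (and one stored copy of the weights) instead of T.
assert torch.isclose(p_ens.sum(), torch.tensor(1.0), atol=1e-5)
assert torch.isclose(p_merged.sum(), torch.tensor(1.0), atol=1e-5)
```

The two distributions are not identical in general; the point of the analysis below is that, near a flat loss basin, the merged-weight loss tracks the ensemble loss while avoiding the $T$-fold serving cost.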
To reduce this cost, we resort to the other technique for weight averaging via adaptation module merging during inference. + +Let $\mathcal{L}_{\mathcal{W}}^{AM} = \mathbb{E}_{x,y}\,\mathcal{L}(\operatorname{softmax}(f^{\widetilde{\mathcal{W}}}(\boldsymbol{x})),\boldsymbol{y})$ denote the expected loss with merging of the stochastic adaptation weights, with $\widetilde{\mathcal{W}} = \frac{1}{T}\sum_{t}\widetilde{\mathcal{W}}_{t}$ (from Equation 3) and $\mathcal{L}$ denoting the cross-entropy loss. Let $\mathcal{L}_{\mathcal{W}}^{Ens} = \mathbb{E}_{x,y}\,\mathcal{L}(\frac{1}{T}\sum_{t = 1}^{T}\operatorname{softmax}(f^{\widetilde{\mathcal{W}}_t}(\boldsymbol{x})),\boldsymbol{y})$ denote the expected loss from logit-level stochastic model ensembling (from Equation 4). + +Prior work (Wortsman et al., 2022) shows that averaging the weights of multiple models fine-tuned with different hyper-parameters improves model performance. They analytically show the similarity in loss between weight-averaging ($\mathcal{L}_{\mathcal{W}}^{AM}$ in our setting) and logit-ensembling ($\mathcal{L}_{\mathcal{W}}^{Ens}$ in our setting) as a function of the flatness of the loss and the confidence of the predictions. While the above analysis is geared towards averaging multiple independently fine-tuned model weights, we can apply a similar analysis in our setting to averaging multiple stochastically obtained adaptation weights, obtaining a favorable loss $\mathcal{L}_{\mathcal{W}}^{AM}$. Further, adaptation merging reduces the serving cost during inference, since we need to retain only one copy of the merged weights, whereas logit-ensembling requires copies of all the adaptation weights. + +# 3 Experiments + +# 3.1 Experimental Setup + +Dataset.
We perform experiments on a wide range of tasks including eight natural language understanding (NLU) tasks in the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019) and three natural language generation (NLG) tasks, namely, E2E (Novikova et al., 2017), WebNLG (Gardent et al., 2017) and DART (Nan et al., 2020). For the NLU and NLG tasks, we follow the same setup as (Houlsby et al., 2019) and (Li and Liang, 2021; Hu et al., 2021), respectively. + +Baselines. We compare AdaMix to full model fine-tuning and several state-of-the-art parameter-efficient fine-tuning (PEFT) methods, namely, Pfeiffer Adapter (Pfeiffer et al., 2021), Houlsby Adapter (Houlsby et al., 2019), BitFit (Zaken et al., 2021), Prefix-tuning (Li and Liang, 2021), UNIPELT (Mao et al., 2021) and LoRA (Hu et al., 2021). We use BERT-base (Devlin et al., 2019) and RoBERTa-large (Liu et al., 2019) as encoders for NLU tasks (results in Table 1 and Table 2), and GPT-2 (Brown et al., 2020) for NLG tasks (results in Table 3). + +AdaMix implementation details. We implement AdaMix in Pytorch and use Tesla V100 gpus for experiments with detailed hyper-parameter configurations presented in Section E in Appendix. AdaMix with adapters uses a dimension of 16 and 48 using BERT-base and RoBERTa-large encoders following the setup of (Hu et al., 2021; Mao et al., 2021) for fair comparison. AdaMix with LoRA uses rank $r = 4$ following the setup of (Hu et al., 2021) to keep the same number of adaptation parameters during inference. The number of adaptation modules in AdaMix is set to 4 for all the tasks and encoders unless otherwise specified. The impact of adapter dimension and number of adaptation modules for NLU tasks are investigated in Table 9 and 10. For most of the experiments and ablation analysis, we report results from AdaMix with adapters for NLU tasks. For demonstrating the generalizability of our framework, we report results from AdaMix with LoRA (Hu et al., 2021) as the under + +
| Model | #Param. | MNLI Acc | QNLI Acc | SST2 Acc | QQP Acc | MRPC Acc | CoLA Mcc | RTE Acc | STS-B Pearson | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Full Fine-tuning† | 355.0M | 90.2 | 94.7 | 96.4 | 92.2 | 90.9 | 68.0 | 86.6 | **92.4** | 88.9 |
| Pfeiffer Adapter† | 3.0M | 90.2 | 94.8 | 96.1 | 91.9 | 90.2 | 68.3 | 83.8 | 92.1 | 88.4 |
| Pfeiffer Adapter† | 0.8M | 90.5 | 94.8 | 96.6 | 91.7 | 89.7 | 67.8 | 80.1 | 91.9 | 87.9 |
| Houlsby Adapter† | 6.0M | 89.9 | 94.7 | 96.2 | 92.1 | 88.7 | 66.5 | 83.4 | 91.0 | 87.8 |
| Houlsby Adapter† | 0.8M | 90.3 | 94.7 | 96.3 | 91.5 | 87.7 | 66.3 | 72.9 | 91.5 | 86.4 |
| LoRA† | 0.8M | 90.6 | 94.8 | 96.2 | 91.6 | 90.2 | 68.2 | 85.2 | 92.3 | 88.6 |
| AdaMix Adapter | 0.8M | **90.9** | **95.4** | **97.1** | **92.3** | **91.9** | **70.2** | **89.2** | **92.4** | **89.9** |
+ +Table 1: Results for NLU tasks on GLUE development set with RoBERTa-large encoder. The best result on each task is in bold and “-” denotes missing measure. AdaMix with a mixture of adapters outperforms all competing methods as well as fully fine-tuned large model with only $0.23\%$ tunable parameters.† denotes results reported from (Hu et al., 2021). Mcc refers to Matthews correlation coefficient, and Pearson refers to Pearson correlation. #Param. denotes the number of tunable adaptation parameters used during inference. + +lying PEFT mechanism for NLG tasks. + +# 3.2 Key Results + +# 3.2.1 NLU Tasks + +Tables 1 and 2 show the performance comparison among PEFT models with RoBERTa-large and BERT-base encoders respectively. Fully fine-tuned RoBERTa-large and BERT-base provide the ceiling performance. We observe AdaMix with a mixture-of-adapters to significantly outperform other state-of-the-art baselines on most tasks with different encoders. AdaMix with adapters is the only PEFT method which outperforms full model fine-tuning on all the tasks and on average score. + +
| Model | #Param. | Avg. |
| --- | --- | --- |
| Full Fine-tuning† | 110M | 82.7 |
| Houlsby Adapter† | 0.9M | 83.0 |
| BitFit$\diamond$ | 0.1M | 82.3 |
| Prefix-tuning† | 0.2M | 82.1 |
| LoRA† | 0.3M | 82.2 |
| UNIPELT (AP)† | 1.1M | 83.1 |
| UNIPELT (APL)† | 1.4M | 83.5 |
| AdaMix Adapter | 0.9M | **84.5** |
+ +Table 2: Results for NLU tasks on GLUE development set with BERT-base encoder and AdaMix with a mixture-of-adapters. The best result on each task is in bold. $\dagger$ and $\diamond$ denote results reported from (Mao et al., 2021; Zaken et al., 2021). Detailed task-specific results are reported in Table 13 of Appendix. #Param. refers to the number of tunable adaptation parameters during inference. + +# 3.2.2 NLG Tasks + +AdaMix leverages a mixture of adaptations to improve over the underlying PEFT method, as demonstrated in Table 3 for E2E NLG: AdaMix with LoRA and AdaMix with adapters outperform LoRA (Hu et al., 2021) and adapters (Houlsby et al., 2019), respectively. We report results on DART and WebNLG in Tables 4 and 5 in Appendix. + +# 3.2.3 Few-shot NLU + +In contrast to the fully supervised setting in the above experiments, we also perform few-shot experiments on six GLUE tasks following the same setup (e.g., shots, train and test splits) and evaluation as in (Wang et al., 2021). The detailed experimental configuration is presented in Section B of Appendix. AdaMix uses a mixture-of-adapters with prompt-based fine-tuning (Gao et al., 2021). + +Table 6 shows the performance comparison among different PEFT methods with $|K| = 30$ labeled examples and RoBERTa-large as the frozen encoder. We observe a significant performance gap between most PEFT methods and full model prompt-based fine-tuning, i.e., with all model parameters updated. AdaMix with adapters outperforms full model tuning for few-shot NLU, similar to the fully supervised setting. Note that AdaMix and LiST (Wang et al., 2021) use a similar adapter design with prompt-based fine-tuning. + +# 3.3 Ablation Study + +We perform all the ablation analyses on AdaMix with adapters for parameter-efficient fine-tuning. + +Analysis of adaptation merging.
In this ablation study, we do not merge adaptation modules and consider two different routing strategies at inference time: (a) randomly routing input to any adaptation module, and (b) fixed routing where we route all the input to the first adaptation module in AdaMix. From Table 7, we observe AdaMix with adaptation merging to perform better than any of the other variants without the merging mechanism. + +
| Model | #Param. | BLEU | NIST | MET | ROUGE-L | CIDEr |
| --- | --- | --- | --- | --- | --- | --- |
| Full Fine-tuning† | 354.92M | 68.2 | 8.62 | 46.2 | 71.0 | 2.47 |
| Lin AdapterL† | 0.37M | 66.3 | 8.41 | 45.0 | 69.8 | 2.40 |
| Lin Adapter† | 11.09M | 68.9 | 8.71 | 46.1 | 71.3 | 2.47 |
| Houlsby Adapter† | 11.09M | 67.3 | 8.50 | 46.0 | 70.7 | 2.44 |
| FTTop2† | 25.19M | 68.1 | 8.59 | 46.0 | 70.8 | 2.41 |
| PreLayer† | 0.35M | 69.7 | 8.81 | 46.1 | 71.4 | 2.49 |
| LoRA† | 0.35M | 70.4 | 8.85 | **46.8** | 71.8 | 2.53 |
| LoRA (repr.) | 0.35M | 69.8 | 8.77 | 46.6 | 71.8 | 2.52 |
| AdaMix Adapter | 0.42M | 69.8 | 8.75 | **46.8** | 71.9 | 2.52 |
| AdaMix LoRA | 0.35M | **71.0** | **8.89** | **46.8** | **72.2** | **2.54** |
+ +Table 3: Results on E2E NLG Challenge with GPT-2 medium backbone. Best result on each task is in bold. We report AdaMix results with both adapters and LoRA as underlying PEFT method. AdaMix outperforms all competing methods as well as fully fine-tuned large model with only $0.1\%$ tunable parameters.† denotes results reported from (Hu et al., 2021) and repr. denotes reproduced results. #Param. denotes the number of tunable adaptation parameters used during inference. Results on DART and WebNLG presented in Tables 4 and 5 in Appendix. + +
| Model | #Param. | BLEU |
| --- | --- | --- |
| Full Fine-tuning† | 354.92M | 46.2 |
| Lin AdapterL† | 0.37M | 42.4 |
| Lin Adapter† | 11.09M | 45.2 |
| FTTop2† | 25.19M | 41.0 |
| PrefLayer† | 0.35M | 46.4 |
| LoRA† | 0.35M | 47.1 |
| LoRA (repr.) | 0.35M | 47.35 |
| AdaMix Adapter | 0.42M | 47.72 |
| AdaMix LoRA | 0.35M | **47.86** |
+ +Notably, all of the AdaMix variants outperform full model tuning. + +Moreover, Figure 4 shows that the performance of the merging mechanism is consistently better than the average performance of random routing and comparable to the best performance of random routing. + +Averaging weights vs. ensembling logits. We compare AdaMix with a variant of logit ensembling, denoted as AdaMix-Ensemble. To this end, we make four random routing passes through the network for every input $(T = 4)$ and average the logits from the different passes as the final predicted logit. Inference time for this ensembling method is $4 \times$ AdaMix. We run repeated experiments with three different seeds and report mean performance in Ta + +Table 4: Results on DART with GPT-2 backbone encoder. Best result on each task is in bold. We report AdaMix results with both adapters and LoRA as underlying PEFT method. AdaMix outperforms all competing methods as well as fully fine-tuned large model with only $0.1\%$ tunable parameters. $^{\dagger}$ denotes results reported from (Hu et al., 2021) and repr. denotes reproduced results. #Param. denotes the number of tunable adaptation parameters used during inference.
Model#Param.BLEU
Full Fine-tuning†354.92M46.5
Lin AdapterL†0.37M50.2
Lin Adapter†11.09M54.9
FTTop2†25.19M36.0
Prefix†0.35M55.1
LoRA†0.35M55.3
LoRA (repr.)0.35M55.37
AdaMix Adapter0.42M54.94
AdaMix LoRA0.35M55.64
+ +Table 5: Results on WebNLG with GPT-2 medium backbone. The results are based on all categories in the test set of WebNLG. Best result on each task is in bold. We report AdaMix results with both adapters and LoRA as underlying PEFT method. AdaMix outperforms all competing methods as well as fully fine-tuned large model with only $0.1\%$ tunable parameters. $^{\dagger}$ denotes results reported from (Hu et al., 2021) and repr. denotes reproduced results. #Param. denotes the number of tunable adaptation parameters used during inference. + +ble 7. We observe AdaMix with adaptation weight averaging to outperform logit-ensembling following our analysis $(\mathcal{L}_{\mathcal{W}}^{AM}$ v.s. $\mathcal{L}_{\mathcal{W}}^{Ens})$ in Section 2.5. + +Analysis of consistency regularization. We drop consistency regularization during training for ablation and demonstrate significant performance degradation in Table 8. + +Analysis of adaptation module sharing. We remove adaptation module sharing in AdaMix for ablation and keep four different copies of project-down and four project-up FFN layers. From Table 8 we observe the performance gap between AdaMix and AdaMix w/o sharing to increase with decrease in the dataset size demonstrating the importance of parameter sharing for low-resource tasks (e.g., + +
| Model | MNLI | RTE | QQP | SST2 | Subj | MPQA | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Full Prompt Fine-tuning* | 62.8 (2.6) | 66.1 (2.2) | 71.1 (1.5) | 91.5 (1.0) | 91.0 (0.5) | 82.7 (3.8) | 77.5 |
| Head-only* | 54.1 (1.1) | 58.8 (2.6) | 56.7 (4.5) | 85.6 (1.0) | 82.1 (2.5) | 64.1 (2.1) | 66.9 |
| BitFit* | 54.4 (1.3) | 59.8 (3.5) | 58.6 (4.4) | 87.3 (1.1) | 83.9 (2.3) | 65.8 (1.8) | 68.3 |
| Prompt-tuning* | 47.3 (0.2) | 53.0 (0.6) | 39.9 (0.7) | 75.7 (1.7) | 51.5 (1.4) | 70.9 (2.4) | 56.4 |
| Houlsby Adapter* | 35.7 (1.1) | 51.0 (3.0) | 62.8 (3.0) | 57.0 (6.2) | 83.2 (5.4) | 57.2 (3.5) | 57.8 |
| LiST Adapter* | 62.4 (1.7) | 66.6 (3.9) | 71.2 (2.6) | 91.7 (1.0) | 90.9 (1.3) | 82.6 (2.0) | 77.6 |
| AdaMix Adapter | 65.6 (2.6) | 69.6 (3.4) | 72.6 (1.2) | 91.8 (1.1) | 91.5 (2.0) | 84.7 (1.6) | 79.3 |
+ +Table 6: Average performance and standard deviation of several parameter-efficient fine-tuning strategies based on RoBERTa-large with $|\mathcal{K}| = 30$ training labels. The best performance is shown in **bold**. Prompt-tuning, Head-only and BitFit tune $1M$ model parameters during inference. Houlsby Adapter, LiST Adapter and AdaMix Adapter tune $14M$ model parameters. * denotes that the results are taken from (Wang et al., 2021). + +
| Model | #Param. | Avg. |
| --- | --- | --- |
| Full Fine-tuning | 110M | 82.7 |
| AdaMix w/ Merging | 0.9M | 84.5 |
| AdaMix w/o Merging + RandomRouting | 3.6M | 83.3 |
| AdaMix w/o Merging + FixedRouting | 0.9M | 83.7 |
| AdaMix w/o Merging + Ensemble | 3.6M | 83.2 |
+ +![](images/eba41bc4cff1bfc449d0455619c1788aeb267dd1844a68471a923cdd2e8b4d10.jpg) +Figure 4: Violin plot of AdaMix-RandomRouting performance distribution with RoBERTa-large encoders. Red dot denotes the performance of AdaMix. + +Table 7: AdaMix without adaptation merging and different routing and ensembling strategies. Average results are presented on GLUE development set with BERT-base encoder. Detailed task results in Table 14 of Appendix for BERT-base and RoBERTa-large encoders. + +
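The contrast in Table 7 between merging (weight averaging) and ensembling can be made concrete with a small numeric sketch. This is only an illustration under assumptions: the dimensions `d`, `r`, the module count `K`, and the ReLU nonlinearity are all made up, and this is not the paper's released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, K = 8, 2, 4  # model dim, bottleneck dim, number of adaptation modules (made up)

# K hypothetical adaptation modules: (project-down, project-up) weight pairs
modules = [(rng.normal(size=(d, r)), rng.normal(size=(r, d))) for _ in range(K)]

def adapt(x, down, up):
    # adapter-style transform with an assumed ReLU nonlinearity
    return np.maximum(x @ down, 0.0) @ up

x = rng.normal(size=d)

# Ensembling: K forward passes, average the outputs (K x inference cost)
ensemble_out = np.mean([adapt(x, dn, up) for dn, up in modules], axis=0)

# Merging: average the module weights once, then run a single forward pass
down_avg = np.mean([dn for dn, _ in modules], axis=0)
up_avg = np.mean([up for _, up in modules], axis=0)
merged_out = adapt(x, down_avg, up_avg)

# Due to the nonlinearity the two outputs differ, but merging keeps the
# inference-time parameter count and FLOPs at those of a single module.
print(ensemble_out.shape, merged_out.shape)  # (8,) (8,)
```

The point of the sketch is the cost asymmetry: ensembling must execute all `K` modules per input, while merging pays the averaging cost once and thereafter serves a single module.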
| Model / #Train | MNLI 393k | QNLI 108k | SST2 67k | MRPC 3.7k | RTE 2.5k |
| --- | --- | --- | --- | --- | --- |
| Full Fine-tuning | 90.2 | 94.7 | 96.4 | 90.9 | 86.6 |
| AdaMix | 90.9 | 95.4 | 97.1 | 91.9 | 89.2 |
| w/o Consistency | 90.7 | 95.0 | 97.1 | 91.4 | 84.8 |
| w/o Sharing | 90.9 | 95.0 | 96.4 | 90.4 | 84.1 |
RTE, MRPC). This is further demonstrated in Figure 7 in the Appendix, which shows faster convergence and lower training loss for AdaMix with sharing compared to AdaMix without sharing, given the same number of training steps. We explore which adaptation module to share (project-up vs. project-down) in Table 11 in the Appendix, which shows similar results.

Impact of the number of adaptation modules. In this study, we vary the number of adaptation modules in AdaMix as 2, 4 and 8 during training. Table 9 shows diminishing returns on aggregate task performance with an increasing number of modules. As we increase sparsity and the number of tunable parameters by increasing the number of adaptation modules, low-resource tasks like RTE and SST-2 – with a limited amount of labeled data for fine-tuning – degrade in performance compared to high-resource tasks like MNLI and QNLI.

Table 8: Ablation study demonstrating the impact of consistency regularization and sharing in AdaMix.
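The stochastic routing and consistency regularization ablated above can be sketched as a toy example. This is not the released implementation: the shapes, the ReLU adapter form, and the symmetrized-KL consistency term are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, K = 8, 2, 4  # made-up model dim, bottleneck dim, and module count

downs = [rng.normal(size=(d, r)) for _ in range(K)]
ups = [rng.normal(size=(r, d)) for _ in range(K)]

def routed_forward(x):
    # stochastic routing: pick one adaptation module uniformly at random
    i = int(rng.integers(K))
    return x + np.maximum(x @ downs[i], 0.0) @ ups[i]  # residual adapter

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=d)
p = softmax(routed_forward(x))  # first stochastic pass
q = softmax(routed_forward(x))  # second stochastic pass

# consistency regularization: symmetrized KL between two routed passes,
# pushing the randomly selected modules toward agreeing predictions
kl = lambda a, b: float(np.sum(a * np.log(a / b)))
consistency_loss = 0.5 * (kl(p, q) + kl(q, p))
print(consistency_loss >= 0.0)  # True
```

Dropping `consistency_loss` from the training objective corresponds to the "w/o Consistency" row of Table 8.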
| Adaptation Modules | MNLI 393k | QNLI 108k | SST2 67k | MRPC 3.7k | RTE 2.5k |
| --- | --- | --- | --- | --- | --- |
| 2 | 90.9 | 95.2 | 96.8 | 90.9 | 87.4 |
| 4* | 90.9 | 95.4 | 97.1 | 91.9 | 89.2 |
| 8 | 90.9 | 95.3 | 96.9 | 91.4 | 87.4 |
Table 9: Varying the number of adaptation modules in AdaMix with RoBERTa-large encoder. * denotes the number of modules used in AdaMix with adapters.

Impact of adapter bottleneck dimension. Table 10 shows the impact of the bottleneck dimension of adapters with different encoders in AdaMix. Model performance improves as the bottleneck dimension, and hence the number of trainable parameters, increases, with diminishing returns after a certain point.

# 4 Related Work

Parameter-efficient fine-tuning of PLMs. Recent works on parameter-efficient fine-tuning (PEFT) can be roughly divided into two categories: (1) tuning a subset of existing parameters, including head fine-tuning (Lee et al., 2019), bias term
| Adapter Dimension | #Param. | MNLI 393k | QNLI 108k | SST2 67k | MRPC 3.7k | RTE 2.5k |
| --- | --- | --- | --- | --- | --- | --- |
| 8 | 0.4M | 90.7 | 95.2 | 96.8 | 91.2 | 87.7 |
| 16* | 0.8M | 90.9 | 95.4 | 97.1 | 91.9 | 89.2 |
| 32 | 1.5M | 91.0 | 95.4 | 96.8 | 90.7 | 89.2 |
+ +Table 10: Varying the bottleneck dimension of adapters in AdaMix with RoBERTa-large encoder. * denotes the bottleneck dimension used in AdaMix with adapters. Results with BERT-base encoder in Table 12 in Appendix. + +tuning (Zaken et al., 2021), (2) tuning newly-introduced parameters including adapters (Houlsby et al., 2019; Pfeiffer et al., 2020), prompt-tuning (Lester et al., 2021), prefix-tuning (Li and Liang, 2021) and low-rank adaptation (Hu et al., 2021). As opposed to prior works operating on a single adaptation module, AdaMix introduces a mixture of adaptation modules with stochastic routing during training and adaptation module merging during inference to keep the same computational cost as with a single module. Further, AdaMix can be used on top of any PEFT method to further boost its performance. + +Mixture-of-Expert (MoE). Shazeer et al., 2017 introduced the MoE model with a single gating network with $Top - k$ routing and load balancing across experts. Fedus et al., 2021 propose initialization and training schemes for $Top - 1$ routing. Zuo et al., 2021 propose consistency regularization for random routing; Yang et al., 2021 propose $k$ Top-1 routing with expert-prototypes, and Roller et al., 2021; Lewis et al., 2021 address other load balancing issues. All the above works study sparse MoE with pre-training the entire model from scratch. In contrast, we study parameter-efficient adaptation of pre-trained language models by tuning only a very small number of sparse adapter parameters. + +Averaging model weights. Recent explorations (Szegedy et al., 2016; Matena and Raffel, 2021; Wortsman et al., 2022; Izmailov et al., 2018) study model aggregation by averaging all the model weights. (Matena and Raffel, 2021) propose to merge pre-trained language models which are fine-tuned on various text classification tasks. 
(Wortsman et al., 2022) explore averaging model weights from various independent runs on the same task with different hyper-parameter configurations. In contrast to the above works on full model fine-tuning, we focus on parameter-efficient fine-tuning. We explore weight averaging for merging the weights of adaptation modules, which consist of small sets of tunable parameters that are updated during model tuning while the large model parameters are kept fixed.

# 5 Conclusions

We develop a new framework, AdaMix, for parameter-efficient fine-tuning (PEFT) of large pre-trained language models (PLMs). AdaMix leverages a mixture of adaptation modules to improve downstream task performance without increasing the computational cost (e.g., FLOPs, parameters) of the underlying adaptation method. We demonstrate that AdaMix works with, and improves over, different PEFT methods such as adapters and low-rank decompositions across NLU and NLG tasks.

By tuning only $0.1\text{--}0.2\%$ of PLM parameters, AdaMix outperforms full model fine-tuning, which updates all the model parameters, as well as other state-of-the-art PEFT methods.

# 6 Limitations

The proposed AdaMix method is somewhat compute-intensive as it involves fine-tuning large-scale language models. The training cost of AdaMix is higher than that of standard PEFT methods since the training procedure involves multiple copies of adapters. Based on our empirical observations, the number of training iterations for AdaMix is usually $1$–$2\times$ that of standard PEFT methods. This has a negative impact on the carbon footprint of training the described models.

AdaMix is orthogonal to most existing parameter-efficient fine-tuning (PEFT) studies and can potentially improve the performance of any PEFT method. In this work, we explored two representative PEFT methods, adapters and LoRA, but we did not experiment with other combinations such as prompt-tuning and prefix-tuning.
We leave those studies to future work. + +# 7 Acknowledgment + +The authors would like to thank the anonymous referees for their valuable comments and helpful suggestions and would like to thank Guoqing Zheng and Ruya Kang for their insightful comments on the project. This work is supported in part by the US National Science Foundation under grants NSF-IIS 1747614 and NSF-IIS-2141037. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. + +# References + +Armen Aghajanyan, Sonal Gupta, and Luke Zettlemoyer. 2021. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7319-7328, Online. Association for Computational Linguistics. +Roy Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second PASCAL recognising textual entailment challenge. +Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth PASCAL recognizing textual entailment challenge. In TAC. +Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc. +Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. 
The PASCAL recognising textual entailment challenge. In the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Volume 1 (Long and Short Papers), pages 4171-4186. +William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv preprint arXiv:2101.03961. +Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. 2020. Linear mode connectivity and the lottery ticket hypothesis. In International Conference on Machine Learning, pages 3259-3269. PMLR. +Yarin Gal and Zoubin Ghahramani. 2015. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. CoRR, abs/1506.02142. + +Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep Bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1183-1192. PMLR. +Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Association for Computational Linguistics (ACL). +Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from rdf data. In Proceedings of the 10th International Conference on Natural Language Generation, pages 124-133. +Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. 
+Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799. PMLR. +Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685. +Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. 2018. Averaging weights leads to wider optima and better generalization. arXiv preprint arXiv:1803.05407. +Jaejun Lee, Raphael Tang, and Jimmy Lin. 2019. What would elsa do? freezing layers during transformer fine-tuning. arXiv preprint arXiv:1911.03090. +Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan First, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2020. Gshard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668. +Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. CoRR, abs/2104.08691. +Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. 2021. Base layers: Simplifying training of large, sparse models. In ICML. +Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. CoRR, abs/2101.00190. + +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. +Yuning Mao, Lambert Mathias, Rui Hou, Amjad Alma-hairi, Hao Ma, Jiawei Han, Wen-tau Yih, and Madian Khabsa. 2021. Unipelt: A unified framework for parameter-efficient language model tuning. arXiv preprint arXiv:2110.07577. +Michael Matena and Colin Raffel. 2021. 
Merging models with fisher-weighted averaging. arXiv preprint arXiv:2111.09832. +Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, et al. 2020. Dart: Open-domain structured data record to text generation. arXiv preprint arXiv:2007.02871. +Behnam Neyshabur, Hanie Sedghi, and Chiyuan Zhang. 2020. What is being transferred in transfer learning? Advances in neural information processing systems, 33:512-523. +Jekaterina Novikova, Ondrej Dušek, and Verena Rieser. 2017. The e2e dataset: New challenges for end-to-end generation. arXiv preprint arXiv:1706.09254. +Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. +Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models. arXiv preprint arXiv:2105.11447. +Jonas Pfeiffer, Aishwarya Kamath, Andreas Rückle, Kyunghyun Cho, and Iryna Gurevych. 2021. Adapterfusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487-503. +Jonas Pfeiffer, Andreas Rückle, Clifton Poth, Aishwarya Kamath, Ivan Vulic, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020. Adapterhub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020): Systems Demonstrations, pages 46-54, Online. Association for Computational Linguistics. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. +Stephen Roller, Sainbayar Sukhbaatar, Arthur D. Szlam, and Jason Weston. 2021. Hash layers for large sparse models. ArXiv, abs/2106.04426. 
+ +Thibault Sellam, Steve Yadowsky, Ian Tenney, Jason Wei, Naomi Saphra, Alexander D'Amour, Tal Linzen, Jasmijn Bastings, Iulia Raluca Turc, Jacob Eisenstein, Dipanjan Das, and Ellie Pavlick. 2022. The multiBERTs: BERT reproductions for robustness analysis. In International Conference on Learning Representations. +Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538. +Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. 2022. Using deep-speed and megatron to train megatron-turing nlg 530b, a large-scale generative language model. arXiv preprint arXiv:2201.11990. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. +Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818-2826. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. +Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, and Jianfeng Gao. 2021. List: Lite self-training makes efficient few-shot learners. arXiv preprint arXiv:2110.06274. +Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. 
Annotating expressions of opinions and emotions in language. *Language resources and evaluation*, 39(2):165-210. +Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. +Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. 2022. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. arXiv preprint arXiv:2203.05482. + +An Yang, Junyang Lin, Rui Men, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Jiamang Wang, Yong Li, et al. 2021. M6-t: Exploring sparse expert models and beyond. arXiv preprint arXiv:2105.15082. +Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199. +Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2021. Revisiting few-sample BERT fine-tuning. +Simiao Zuo, Xiaodong Liu, Jian Jiao, Young Jin Kim, Hany Hassan, Ruofei Zhang, Tuo Zhao, and Jianfeng Gao. 2021. Taming sparsely activated transformer with stochastic experts. arXiv preprint arXiv:2110.04260. + +# Appendix + +# A Background + +# A.1 Mixture-of-Experts + +The objective of sparsely-activated model design is to support conditional computation and increase the parameter count of neural models like Transformers while keeping the floating point operations (FLOPs) for each input example constant. Mixture-of-Experts (MoE) Transformer models (Shazeer et al., 2017; Fedus et al., 2021; Lepikhin et al., 2020; Zuo et al., 2021) achieve this by using $N$ feed-forward networks (FFN), namely "experts" denoted as $\mathbb{E}_{i=1}^{N}$ , each with its own set of learnable weights that compute different representations of an input token $x$ based on context. 
To sparsify the network and keep the FLOPs constant, an additional gating network $\mathbb{G}$ produces a sparse $N$-dimensional output vector that routes each token through only a few of these experts. Note that a sparse model with $N = 1$, corresponding to a single FFN layer in each Transformer block, collapses to the traditional dense model.

Consider $x_{s}$ to be the input token representation at the $s^{th}$ position to the MoE layer comprising the $\{\mathbb{E}_i\}_{i = 1}^{N}$ expert FFNs. Also, consider $w_{i}^{in}$ and $w_{i}^{out}$ to be the input and output projection matrices for the $i^{th}$ expert. The expert output $\mathbb{E}_i(x_s)$ is given by:

$$
\mathbb{E}_{i}\left(x_{s}\right) = w_{i}^{out} \cdot \mathrm{GeLU}\left(w_{i}^{in} \cdot x_{s}\right) \tag{5}
$$

Consider $\mathbb{G}(x_s)$ to be the output of the gating network. The output of the sparse MoE layer is given by:

$$
h\left(x_{s}\right) = \sum_{i} \mathbb{G}\left(x_{s}\right)_{i}\, \mathbb{E}_{i}\left(x_{s}\right) \tag{6}
$$

where $\mathbb{G}(x_s)_i$, the $i^{th}$ logit of the output of $\mathbb{G}(x_s)$, denotes the probability of selecting expert $\mathbb{E}_i$.

To keep the number of FLOPs in the sparse Transformer the same as in a dense one, the gating mechanism can be constrained to route each token to only one expert FFN, i.e., $\sum_{i}\mathbb{G}(x_{s})_{i} = 1$.

# A.2 Adapters

The predominant methodology for task adaptation is to tune all of the trainable parameters of the PLMs for every task. This raises significant resource challenges both during training and deployment. A recent study (Aghajanyan et al., 2021) shows that PLMs have a low intrinsic dimension that can match the performance of the full parameter space.

![](images/81c31d4fc792a2818efc042e43538d6c177c9897fcacbbe444322f2589b98a4c.jpg)
Figure 5: Conventional adapter design in standard Transformer architecture.
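The sparse MoE computation in Eqs. (5)–(6) above can be sketched as a toy example. The dimensions, the linear gating network, and the top-1 routing choice are assumptions made for illustration, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, N = 8, 16, 4  # model dim, expert hidden dim, number of experts (made up)

W_in = [rng.normal(size=(d, h)) for _ in range(N)]   # w_i^in for each expert
W_out = [rng.normal(size=(h, d)) for _ in range(N)]  # w_i^out for each expert
W_gate = rng.normal(size=(d, N))                     # gating network (assumed linear)

def gelu(z):
    # tanh approximation of the GeLU activation
    return 0.5 * z * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (z + 0.044715 * z**3)))

def expert(i, x_s):
    # Eq. (5): E_i(x_s) = w_i^out . GeLU(w_i^in . x_s)
    return gelu(x_s @ W_in[i]) @ W_out[i]

def moe_layer(x_s):
    # Eq. (6) under top-1 routing: G(x_s) is effectively one-hot, so only one
    # expert runs, keeping the FLOPs equal to those of a single dense FFN
    i = int(np.argmax(x_s @ W_gate))
    return expert(i, x_s)

x_s = rng.normal(size=d)
print(moe_layer(x_s).shape)  # (8,)
```

With `N = 1` the gate becomes trivial and the layer reduces to the ordinary dense FFN, matching the collapse noted above.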
To adapt PLMs for downstream tasks with a small number of parameters, adapters (Houlsby et al., 2019) have recently been introduced as an alternative approach for lightweight tuning.

The adapter tuning strategy judiciously introduces new parameters into the original PLMs. During fine-tuning, only the adapter parameters are updated while the remaining parameters of the PLM are kept frozen. Adapters usually consist of two fully connected layers, as shown in Figure 5: the adapter layer uses a down-projection $\mathcal{W}^{down} \in \mathcal{R}^{d \times r}$ to project the input representation $x$ to a low-dimensional space $r$ (referred to as the bottleneck dimension), with $d$ being the model dimension, followed by a nonlinear activation function $f(\cdot)$, and an up-projection $\mathcal{W}^{up} \in \mathcal{R}^{r \times d}$ to project the low-dimensional features back to the original dimension. The adapters are further surrounded by residual connections.

Given the above adapter design with parameters $\psi$, the dataset $\mathcal{D}_K$, and a pre-trained language model encoder enc with parameters $\Theta_{\mathrm{PLM}}$, where $\Theta_{\mathrm{PLM}} \gg \psi$, we want to perform the following optimization for efficient model adaptation:

$$
\psi \leftarrow \operatorname{argmin}_{\psi} \mathcal{L}\left(\mathcal{D}_{K}; \Theta_{\mathrm{PLM}}, \psi\right) \tag{7}
$$

# B Few-shot NLU Datasets

Data. In contrast to the fully supervised setting in the above experiments, we also perform few-shot experiments following the prior study (Wang et al., 2021) on six tasks including MNLI (Williams et al., 2018), RTE (Dagan et al., 2005; Bar Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), $\mathrm{QQP}^1$ and SST-2 (Socher et al.). The results are reported on their development set following (Zhang et al., 2021).
MPQA (Wiebe et al., 2005) and Subj (Pang and Lee, 2004) are used for polarity and subjectivity detection, where we follow (Gao et al., 2021) to keep 2,000 examples for testing. The few-shot model only has access to $|\mathcal{K}|$ labeled samples for any task. Following the true few-shot learning setting (Perez et al., 2021; Wang et al., 2021), we do not use any additional validation set for hyper-parameter tuning or early stopping. The performance of each model is reported after a fixed number of training epochs. For a fair comparison, we use the same set of few-shot labeled instances for training as in (Wang et al., 2021). We train each model with 5 different seeds and report average performance with standard deviation across the runs. In the few-shot experiments, we follow (Wang et al., 2021) in training AdaMix via the prompt-based fine-tuning strategy. In contrast to (Wang et al., 2021), we do not use any unlabeled data.

# C Ablation Study
| Model | MNLI Acc | SST2 Acc |
| --- | --- | --- |
| Sharing Project-up | 90.9 | 97.1 |
| Sharing Project-down | 90.8 | 97.1 |
+ +Table 11: Ablation study demonstrating the impact of parameter sharing in AdaMix adapter framework. + +
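A back-of-the-envelope parameter count shows why sharing one of the two projections across modules helps: with either the project-down or the project-up shared, the adaptation parameter count shrinks substantially. The dimensions below are made-up illustrative values, not the paper's configuration.

```python
# Illustrative parameter counts for K adaptation modules with and without
# sharing; d (model dim), r (bottleneck dim) and K are made-up values.
d, r, K = 768, 16, 4

no_sharing = K * d * r + K * r * d   # K project-down + K project-up matrices
share_down = d * r + K * r * d       # one shared project-down, K project-ups
share_up = K * d * r + r * d         # K project-downs, one shared project-up

print(no_sharing, share_down, share_up)  # 98304 61440 61440
```

By symmetry, sharing either projection gives the same count, consistent with the near-identical rows of Table 11.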
| Adapter Dim | #Param. | MNLI 393k | QNLI 108k | SST2 67k | MRPC 3.7k | RTE 2.5k |
| --- | --- | --- | --- | --- | --- | --- |
| *BERT-base* | | | | | | |
| 8 | 0.1M | 82.2 | 91.1 | 92.2 | 87.3 | 72.6 |
| 16 | 0.3M | 83.0 | 91.5 | 92.2 | 88.2 | 72.9 |
| 32 | 0.6M | 83.6 | 91.3 | 92.2 | 88.5 | 73.6 |
| 48* | 0.9M | 84.7 | 91.5 | 92.4 | 89.5 | 74.7 |
| 64 | 1.2M | 84.4 | 91.8 | 92.3 | 88.2 | 75.1 |
| *RoBERTa-large* | | | | | | |
| 8 | 0.4M | 90.7 | 95.2 | 96.8 | 91.2 | 87.7 |
| 16* | 0.8M | 90.9 | 95.4 | 97.1 | 91.9 | 89.2 |
| 32 | 1.5M | 91.0 | 95.4 | 96.8 | 90.7 | 89.2 |
+ +Table 12: Varying the bottleneck dimension of adapters in AdaMix with BERT-base and RoBERTa-large encoder. * denotes the bottleneck dimension used in AdaMix with adapters. + +# D Detailed Results on NLU Tasks + +The results on NLU tasks are included in Table 1 and Table 13. The performance AdaMix with + +RoBERTa-large encoder achieves the best performance in terms of different task metrics in the GLUE benchmark. AdaMix with adapters is the only PEFT method which outperforms full model fine-tuning on all the tasks and on average score. Additionally, the improvement brought by AdaMix is more significant with BERT-base as the encoder, demonstrating $2.2\%$ and $1.2\%$ improvement over the performance of full model fine-tuning and the best performing baseline UNIPELT with BERT-base. The improvement is observed to be consistent as that with RoBERTa-large on every task. The NLG results are included in Table 4 and 5. + +# E Hyper-parameter + +Detailed hyper-parameter configuration for different tasks presented in Table 15 and Table 16. + +
| Model | #Param. | MNLI Acc | QNLI Acc | SST2 Acc | QQP Acc/F1 | MRPC Acc/F1 | CoLA Mcc | RTE Acc | STS-B Pearson | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Full Fine-tuning† | 110M | 83.2 | 90.0 | 91.6 | -/87.4 | -/90.9 | 62.1 | 66.4 | 89.8 | 82.7 |
| Houlsby Adapter† | 0.9M | 83.1 | 90.6 | 91.9 | -/86.8 | -/89.9 | 61.5 | 71.8 | 88.6 | 83.0 |
| BitFit$^\diamond$ | 0.1M | 81.4 | 90.2 | 92.1 | -/84.0 | -/90.4 | 58.8 | 72.3 | 89.2 | 82.3 |
| Prefix-tuning† | 0.2M | 81.2 | 90.4 | 90.9 | -/83.3 | -/91.3 | 55.4 | 76.9 | 87.2 | 82.1 |
| LoRA† | 0.3M | 82.5 | 89.9 | 91.5 | -/86.0 | -/90.0 | 60.5 | 71.5 | 85.7 | 82.2 |
| UNIPELT (AP)† | 1.1M | 83.4 | 90.8 | 91.9 | -/86.7 | -/90.3 | 61.2 | 71.8 | 88.9 | 83.1 |
| UNIPELT (APL)† | 1.4M | 83.9 | 90.5 | 91.5 | 85.5 | -/90.2 | 58.6 | 73.7 | 88.9 | 83.5 |
| AdaMix Adapter | 0.9M | 84.7 | 91.5 | 92.4 | 90.7/87.6 | 89.5/92.4 | 62.9 | 74.7 | 89.9 | 84.5 |
+ +Table 13: Main results on GLUE development set with BERT-base encoder. The best result on each task is in bold and “-” denotes the missing measure. $\dagger$ and $\diamond$ denote that the reported results are taken from (Mao et al., 2021; Zaken et al., 2021). The average performance is calculated based on F1 of QQP and MRPC. #Param. refers to the number of updated parameters in the inference stage. + +![](images/de410361c0f4a721199915829a46bbc41fcb4d6a50279b1d86026a364207a5d7.jpg) +(a) BERT-base + +![](images/ded01605fe6006b14b10275c5a71dd34bb9b683c5bc16b6e3535781ed9ad8ef7.jpg) +(b) RoBERTa-large +Figure 6: Violin plot of AdaMix-RandomRouting performance distribution with BERT-base and RoBERTa-large encoders. Red dot denotes the performance of AdaMix. + +![](images/e2dcd0a0f64bbfd9800a80f483c87f2b7f9d1a974c997cd02deba51953de2a7a.jpg) +(a) MNLI + +![](images/b9bdca76a85d6f529b0a3c7e8e761411b1938cf5a85225ea6e44eefa4ead77ac.jpg) +(b)QNLI + +![](images/6b203161beed1879356393c430f1f1d90617edde2941b76b5952234183a907dd.jpg) +(c) SST2 +Figure 7: Convergence analysis demonstrating the impact of adapter sharing design in AdaMix. + +
| Model | #Param. | MNLI Acc | QNLI Acc | SST2 Acc | QQP Acc/F1 | MRPC Acc/F1 | CoLA Mcc | RTE Acc | STS-B Pearson | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *BERT-base* | | | | | | | | | | |
| Full Fine-tuning | 110M | 83.2 | 90.0 | 91.6 | -/87.4 | -/90.9 | 62.1 | 66.4 | 89.8 | 82.7 |
| AdaMix | 0.9M | 84.7 | 91.5 | 92.4 | 90.7/87.6 | 89.5/92.4 | 62.9 | 74.7 | 89.9 | 84.5 |
| AdaMix-RandomRouting | 3.6M | 84.3 | 91.1 | 91.8 | 90.6/87.4 | 85.6/89.1 | 60.5 | 72.1 | 89.8 | 83.3 |
| AdaMix-FixedRouting | 0.9M | 84.5 | 91.1 | 91.6 | 90.5/87.3 | 87.5/90.8 | 61.4 | 73.3 | 89.8 | 83.7 |
| AdaMix-Ensemble | 3.6M | 84.3 | 91.2 | 91.6 | 90.5/87.4 | 85.9/89.4 | 59.4 | 72.1 | 89.8 | 83.2 |
| *RoBERTa-large* | | | | | | | | | | |
| Full Fine-tuning | 355.0M | 90.2 | 94.7 | 96.4 | 92.2/- | 90.9/- | 68.0 | 86.6 | 92.4 | 88.9 |
| AdaMix | 0.8M | 90.9 | 95.4 | 97.1 | 92.3/89.8 | 91.9/94.1 | 70.2 | 89.2 | 92.4 | 89.9 |
| AdaMix-RandomRouting | 3.2M | 90.8 | 95.2 | 96.8 | 92.2/89.6 | 90.8/93.3 | 68.8 | 88.5 | 92.2 | 89.4 |
| AdaMix-FixedRouting | 0.8M | 90.7 | 95.1 | 96.8 | 92.1/89.5 | 91.2/93.6 | 68.6 | 89.2 | 92.2 | 89.5 |
| AdaMix-Ensemble | 3.2M | 90.9 | 95.3 | 97.0 | 92.2/89.7 | 91.0/93.5 | 69.3 | 89.1 | 92.4 | 89.7 |
+ +Table 14: Comparing the impact of different routing and ensembling strategies with AdaMix. Results are presented on GLUE development set with BERT-base and RoBERTa-large encoders. Average results are calculated following Table 1 and Table 2 for consistency. The best result on each task is in **bold** and “-” denotes the missing measure. + +
| Task | Learning rate | epoch | batch size | warmup | weight decay | adapter size | adapter num |
| --- | --- | --- | --- | --- | --- | --- | --- |
| *BERT-base* | | | | | | | |
| MRPC | 4e-4 | 100 | 16 | 0.06 | 0.1 | 48 | 4 |
| CoLA | 5e-4 | 100 | 16 | 0.06 | 0.1 | 48 | 4 |
| SST | 4e-4 | 40 | 64 | 0.06 | 0.1 | 48 | 4 |
| STS-B | 5e-4 | 80 | 32 | 0.06 | 0.1 | 48 | 4 |
| QNLI | 4e-4 | 20 | 64 | 0.06 | 0.1 | 48 | 4 |
| MNLI | 4e-4 | 40 | 64 | 0.06 | 0.1 | 48 | 4 |
| QQP | 5e-4 | 60 | 64 | 0.06 | 0.1 | 48 | 4 |
| RTE | 5e-4 | 80 | 64 | 0.06 | 0.1 | 48 | 4 |
| *RoBERTa-large* | | | | | | | |
| MRPC | 3e-4 | 60 | 64 | 0.6 | 0.1 | 16 | 4 |
| CoLA | 3e-4 | 80 | 64 | 0.6 | 0.1 | 16 | 4 |
| SST | 3e-4 | 20 | 64 | 0.6 | 0.1 | 16 | 4 |
| STS-B | 3e-4 | 80 | 64 | 0.6 | 0.1 | 16 | 4 |
| QNLI | 3e-4 | 20 | 64 | 0.6 | 0.1 | 16 | 4 |
| MNLI | 3e-4 | 20 | 64 | 0.6 | 0.1 | 16 | 4 |
| QQP | 5e-4 | 80 | 64 | 0.6 | 0.1 | 16 | 4 |
| RTE | 5e-4 | 60 | 64 | 0.6 | 0.1 | 16 | 4 |
+ +Table 15: Hyperparameter configurations for GLUE tasks. + +
| Task | epoch | warmup steps | adapter size | no. of experts |
| --- | --- | --- | --- | --- |
| *Adapter with AdaMix* | | | | |
| E2E NLG Challenge | 20 | 2000 | 8 | 8 |
| WebNLG | 25 | 2500 | 8 | 8 |
| DART | 20 | 2000 | 8 | 8 |
| *LoRA with AdaMix* | | | | |
| E2E NLG Challenge | 20 | 2000 | - | 8 |
| WebNLG | 25 | 2500 | - | 8 |
| DART | 20 | 2000 | - | 8 |
+ +Table 16: Hyperparameter configurations for GPT-2 Medium on NLG tasks. We retain all other default training and generation specific hyper-parameters from LoRA (Hu et al., 2021). \ No newline at end of file diff --git a/adamixmixtureofadaptationsforparameterefficientmodeltuning/images.zip b/adamixmixtureofadaptationsforparameterefficientmodeltuning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..fa7f00011666d1e13b6fd0e7eaaed65c207b4e0d --- /dev/null +++ b/adamixmixtureofadaptationsforparameterefficientmodeltuning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5599f8fdb8566b31842aa2a686d61701cbb2a216084d6f893e355721f4cf17c8 +size 919428 diff --git a/adamixmixtureofadaptationsforparameterefficientmodeltuning/layout.json b/adamixmixtureofadaptationsforparameterefficientmodeltuning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d0889e97c08c2c3a4eb1e2169bfb6153a462c762 --- /dev/null +++ b/adamixmixtureofadaptationsforparameterefficientmodeltuning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:556d326eaa4ba6d967b0523b9dd01cfa18608cf22855ada4b8bf9f15cc3a7f1d +size 512774 diff --git a/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_content_list.json b/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5695656135592c532793bc10229a1d01ea21fd99 --- /dev/null +++ b/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef75e79c1cb8b1944d26e2cbf5b069be3180341ddc3fadec2e8ee2dd16b8ec06 +size 44948 diff --git a/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_model.json 
b/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_model.json new file mode 100644 index 0000000000000000000000000000000000000000..54486930f293e75c6bd1c5eb8ad102bd641ad939 --- /dev/null +++ b/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:884185f1c05bcc765fc478af566a0c6d4c279aa7ac2fa9d7740b84db2766eae8 +size 53456 diff --git a/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_origin.pdf b/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..abee37e0c4535816a6dfb278b6b642f4dab3a41d --- /dev/null +++ b/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8fa5426d204de829a2dd1ecbd396f51cc9d6d63825e18f21b3d6336ca9aa0a62 +size 770097 diff --git a/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/full.md b/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8c745091eba2ff611ca2f73c42278cba614a1a19 --- /dev/null +++ b/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/full.md @@ -0,0 +1,187 @@ +# AdapterShare: Task Correlation Modeling with Adapter Differentiation + +Zhi Chen $^{1*}$ , Bei Chen $^{2}$ , Lu Chen $^{1}$ , Kai Yu $^{1}$ , Jian-Guang Lou $^{2}$ + +$^{1}$ X-LANCE Lab, Department of Computer Science and Engineering MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University $^{2}$ Microsoft Research Asia {zhenchi713, chenlusz, kai.yu}@sjtu.edu.cn, {beichen, jlou}@microsoft.com + +# Abstract + +Thanks to the development of pre-trained language models, 
multitask learning (MTL) methods have achieved great success in natural language understanding. However, current MTL methods pay more attention to task selection or model design to fuse as much knowledge as possible, while the intrinsic task correlation is often neglected. It is important to learn sharing strategies among multiple tasks rather than sharing everything. In this paper, we propose AdapterShare, an adapter differentiation method to explicitly model task correlation among multiple tasks. AdapterShare is automatically learned based on the gradients on tiny held-out validation data. Compared to single-task learning and fully shared MTL methods, our proposed method obtains clear performance improvements. Compared to the existing MTL method AdapterFusion, AdapterShare achieves an absolute average improvement of 1.90 points on five dialogue understanding tasks and 2.33 points on NLU tasks. Our implementation is available at https://github.com/microsoft/ContextualSP. + +# 1 Introduction + +With the development of transformer-based pretrained language models (PLMs), natural language understanding (NLU) has made great progress as a downstream task. There are two main ways to leverage PLMs in NLU tasks. One is the fine-tuning method, which updates the pre-trained language model directly on a target task. The other is adapters (Rebuffi et al., 2017; Houlsby et al., 2019), which introduce a small number of task-specific parameters on top of a fixed PLM. When training on the target task, only the introduced parameters are updated. Compared to the fine-tuning method, adapters are memory-efficient, since the introduced parameters are far fewer than those of the PLM. In this paper, we focus on the approach using adapters. + +![](images/7723a7442fe8aea8eb1dace6627b8a91ce0b77cbd78562e855ffdc2a778e2599.jpg) +Figure 1: The architecture of the adapters with the task correlation modeling method.
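Concretely, an adapter in this sense is a small bottleneck network inserted into a frozen PLM. The sketch below is illustrative only: the hidden and bottleneck sizes, the ReLU nonlinearity, and the residual connection are common choices we assume here, not the exact configuration used in this paper.

```python
import numpy as np

class BottleneckAdapter:
    """Down-project, nonlinearity, up-project, residual connection.

    Only these weights would be trained; the PLM itself stays frozen.
    """
    def __init__(self, hidden_dim=768, bottleneck_dim=64, seed=0):
        rng = np.random.default_rng(seed)
        self.W_down = rng.normal(0.0, 0.02, (hidden_dim, bottleneck_dim))
        self.W_up = rng.normal(0.0, 0.02, (bottleneck_dim, hidden_dim))

    def __call__(self, h):
        z = np.maximum(h @ self.W_down, 0.0)   # ReLU bottleneck
        return h + z @ self.W_up               # residual keeps the PLM signal

adapter = BottleneckAdapter()
h = np.ones((2, 768))          # two token representations from a frozen layer
out = adapter(h)               # shape (2, 768), same as the input
```

The two projection matrices here hold roughly 768 x 64 x 2 parameters, orders of magnitude fewer than a full transformer layer, which is the memory-efficiency argument above.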
+ +To transfer knowledge across different tasks, Stickland and Murray (2019) proposed a multitask learning (MTL) method that updates the weights of a shared adapter using a weighting of the objective functions of all target tasks. The shared adapter captures the common structure underlying all the target tasks. This is a typical multitask learning method based on the implicit assumption that all tasks benefit from each other, where all parameters of the adapter are shared during multitask training. In other words, task correlation is not modeled in the traditional MTL method. In this paper, we propose a robust adapter differentiation method, called AdapterShare, to model the correlation of all target tasks explicitly. As shown in Figure 1, during the multitask learning process, the sharing strategy of the adapter at each PLM layer is automatically learned according to the adapter gradients on small-scale held-out validation data. The learned sharing strategy can be regarded as a discrete task correlation map. + +The closest work is AdapterFusion (Pfeiffer et al., 2021), which is a two-stage learning method. The first stage trains task-wise adapters separately, and the second stage fuses all task-wise adapters with an attention mechanism for each target task. The two-stage method is sensitive to the initialization of the attention weights. If two tasks hurt each other, it is hard to assign zero weight to the corresponding adapter using a soft attention mechanism. Compared to AdapterFusion, our proposed AdapterShare learns all the adapters and their task correlation simultaneously. We adopt a discrete format to represent task correlation, where at each PLM layer, every two tasks either share the adapter (1 in the task correlation map) or not (0 in the task correlation map). + +# 2 Problem Statement + +As discussed, existing multitask learning methods tend to share all parameters. They assume that all target tasks benefit from each other.
However, in practice, it can be detrimental to assume correlation in a set of tasks and simply put them together for learning (Bonilla et al., 2007). In this paper, we propose an approach to learn task correlation automatically. The task correlation indicates how all the target tasks are clustered into several task groups; the tasks in the same task group share parameters. We maintain the task correlation map at the granularity of each transformer layer of the pretrained language model. With the adapter training strategy, the learning process can be formalized as: + +$$ +\Phi_ {i} \leftarrow \operatorname {argmin} \left(L _ {\Phi_ {i}} \left(D _ {i}; \Theta_ {0}, \Phi_ {i}\right)\right), \tag {1} +$$ + +where $\Theta_0$ denotes the initial parameters of the PLM, $\Phi_i$ the adapter parameters of the $i$ -th task $t_i$ , $D_i$ the annotated training samples of the $i$ -th task, and $L_{\Phi_i}(\cdot)$ the loss function of the target task. The adapters consist of adapter networks at all PLM layers: + +$$ +\Phi_ {i} = \left\{\Phi_ {i} ^ {1}, \Phi_ {i} ^ {2}, \dots , \Phi_ {i} ^ {L} \right\}, \tag {2} +$$ + +where $L$ is the number of PLM layers and $\Phi_i^l$ is the adapter parameters at the $l$ -th PLM layer for the task group containing task $t_i$ . As mentioned, the task correlation is at layer granularity. If task $t_j$ is in the same task group as task $t_i$ at the $l$ -th layer, the adapter parameters are shared between these two tasks, i.e., $\Phi_i^l = \Phi_j^l$ . The task groups at the $l$ -th PLM layer are defined by the layer-wise task correlation map $M^l$ . For example, as shown in + +![](images/2755f4846546dffa6f92c49d57728324d028a66eec02b7115fa74557753b8b60.jpg) +Figure 2: Calculated inter-task and intra-task gradients on tiny task-wise held-out validation sets.
+ +Figure 1, there are two task groups: $G_1^l = G_2^l = \{t_1, t_2\}$ and $G_3^l = G_4^l = G_5^l = \{t_3, t_4, t_5\}$ according to the task correlation map $M^l$ , where $M^l(i, j) = 1$ means $t_i$ and $t_j$ are in the same group at the $l$ -th layer. In the next section, we introduce how to learn the layer-wise task correlation map. + +# 3 AdapterShare + +In this section, we first introduce the adopted task correlation learning method in general. We then identify a problem with the existing differentiation algorithm and improve on it in our proposed task correlation learning algorithm, AdapterShare. Note that in the following, all learnable parameters are adapters, while the parameters of the PLM are fixed. + +# 3.1 Adapter Differentiation + +We model task correlation in a discrete format. The discrete task correlation map divides all the target tasks into several task groups, and the tasks in the same task group benefit from each other. The main challenge is how to quantify the effect two tasks have on each other. Inspired by the parameter differentiation method (Wang and Zhang, 2021), we use the interference degree as this metric. The interference degree of two tasks is the negative of the cosine similarity of their inter-task gradients on the shared parameters. The inter-task gradient is calculated on tiny held-out validation data, which contains validation samples of all tasks.
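The negative-cosine interference measure just described, together with the subgroup split it triggers (the three-step adapter differentiation detailed later in this section), can be sketched in a few lines. The function names and tie-breaking details below are our assumptions, not the paper's exact implementation:

```python
import numpy as np

def interference(g_a, g_b):
    """Interference degree of two tasks: negative cosine similarity
    of their accumulated gradients on the shared adapter."""
    return -float(g_a @ g_b / (np.linalg.norm(g_a) * np.linalg.norm(g_b)))

def split_group(grads):
    """Split a task group if its maximum pairwise interference is positive.

    grads maps task name -> accumulated inter-task gradient on held-out data.
    Returns two subgroups (adapter differentiation) or None if no conflict.
    """
    tasks = list(grads)
    if len(tasks) < 2:
        return None
    pairs = [(interference(grads[a], grads[b]), a, b)
             for i, a in enumerate(tasks) for b in tasks[i + 1:]]
    worst, a, b = max(pairs)
    if worst <= 0:                 # no conflicting optimum directions
        return None
    group_a, group_b = {a}, {b}    # the two most-interfering tasks seed the split
    for t in tasks:
        if t not in (a, b):
            # join the representative with the lower interference degree
            if interference(grads[t], grads[a]) <= interference(grads[t], grads[b]):
                group_a.add(t)
            else:
                group_b.add(t)
    return group_a, group_b

# Mirrors Figure 2: t1 and t2 point the same way, t3 points the opposite way.
grads = {"t1": np.array([1.0, 0.1]),
         "t2": np.array([0.9, 0.2]),
         "t3": np.array([-1.0, 0.0])}
group_a, group_b = split_group(grads)   # t3 is split away from t1 and t2
```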
Formally, the interference degree of a task group is:

$$
\mathcal {I} \left(\Phi_ {i} ^ {l}; G _ {i} ^ {l}\right) = \max _ {t _ {i}, t _ {j} \in G _ {i} ^ {l}} - \frac {\overline {{\mathbf {g}}} _ {t _ {i}} ^ {l} \cdot \overline {{\mathbf {g}}} _ {t _ {j}} ^ {l}}{\| \overline {{\mathbf {g}}} _ {t _ {i}} ^ {l} \| * \| \overline {{\mathbf {g}}} _ {t _ {j}} ^ {l} \|}, \tag {3}
$$

$$
\overline {{\mathbf {g}}} _ {t _ {i}} ^ {l} = \nabla L _ {\Phi_ {i} ^ {l}} \left(H _ {i}; \Theta_ {0}, \Phi_ {i} ^ {l}\right), \tag {4}
$$

where $\bar{\mathbf{g}}_{t_i}^l$ is the inter-task gradient of the shared adapter in task group $G_{i}^{l}$ , calculated on the held-out validation data $H_{i}$ of task $t_i$ ; it is the gradient accumulated over all samples in the held-out validation data of task $t_i$ . If the interference degree $\mathcal{I}(\Phi_i^l; G_i^l) > 0$ , at least two tasks in this task group have conflicting optimum directions. For example, as shown in Figure 2, $\overline{\mathbf{g}}_{t_1}^l$ and $\overline{\mathbf{g}}_{t_2}^l$ have similar global optimum directions, while $\overline{\mathbf{g}}_{t_3}^l$ points in the opposite direction, suggesting that $t_3$ may hinder the other two tasks $t_1$ and $t_2$ . These three tasks then need to be divided into two groups: $G_1^l = G_2^l = \{t_1, t_2\}$ and $G_3^l = \{t_3\}$ . The dividing process is called adapter differentiation, in which one task group is split into two subgroups. In detail, adapter differentiation has three steps: 1) the two tasks with the highest interference degree are taken as representatives and put into two different subgroups; 2) every other task in the current task group is compared with these two representatives and added to the subgroup with the lower interference degree; 3) the parameters of the two differentiated adapters are copied from the original adapter. An element of the task correlation map $M^l$ changes from 1 to 0 if the two tasks now belong to different task groups.

Algorithm 1: Task Correlation Learning

- Set all elements of the task correlation maps $\{M^l\}_{l = 1}^L$ to one.
- Initialize the adapter parameters $\{\Phi_i^l\}_{l = 1}^L$ , where $\Phi_0^l = \dots = \Phi_N^l$ .
- Prepare the data for the $N$ tasks: training datasets $\{D_i\}_{i = 1}^N$ and held-out validation datasets $\{H_{i}\}_{i = 1}^{N}$ .
- Training process of each epoch, for $i$ in $1, 2, \dots, N$ :
  1. Sample a mini-batch $b_{i}$ from $D_{i}$ .
  2. Switch the adapters into the $i$ -th task mode $\Phi_i$ according to $\{M^l\}_{l = 1}^L$ .
  3. Compute the loss as in Eq. 1 and update $\Phi_{i}$ .
- Detect adapter differentiation, for $l$ in $1, 2, \dots, L$ and each task group $G_{i}^{l}$ in $\{G_i^l\}_{i = 1}^N$ :
  - For each task $t_i$ in $G_{i}^{l}$ (consistency of intra-task gradients):
    4. Split $H_{i}$ into $H_{i,0}$ and $H_{i,1}$ .
    5. Calculate $\overline{\mathbf{g}}_{t_i,0}^l$ and $\overline{\mathbf{g}}_{t_i,1}^l$ as in Eq. 4.
    6. Calculate $\mathcal{C}(\Phi_i^l)$ as in Eq. 5.
  - If all $\mathcal{C}(\Phi_i^l) > \alpha$ :
    7. Calculate $\overline{\mathbf{g}}_{t_i}^l$ as in Eq. 6.
    8. Calculate $\mathcal{I}(\Phi_i^l;G_i^l)$ as in Eq. 3.
  - If any $\mathcal{I}(\Phi_i^l;G_i^l) > 0$ : 9. perform adapter differentiation; 10. update $M^l$ .

At the beginning of the training process, we set all elements of the task correlation map to 1, which means that all adapter parameters are shared among
| Corpora | #Sample | I(Token) | I(Turn) | O(Token) | Task |
| --- | --- | --- | --- | --- | --- |
| SAMSUM (2019) | 14732 | 104.95 | 11.2 | 20.31 | DS |
| TASK (2019) | 2205 | 34.92 | 2.8 | 10.84 | DC |
| BANK77 (2020) | 12081 | 21.64 | 1 | 3.14 | ID |
| RES8K (2020) | 15270 | 14.44 | 1 | 3.38 | SF |
| WOZ2.0 (2017) | 7608 | 78.96 | 4.6 | 1.30 | DST |
+ +Table 1: Statistics of the five dialogue understanding datasets. $\mathbf{I}_{(\mathrm{Token})}$ and $\mathbf{I}_{(\mathrm{Turn})}$ denote the average number of tokens and the average number of turns of the input dialogue content. $\mathbf{O}_{(\mathrm{Token})}$ denotes the average number of tokens of the task-specific output. + +
| Corpora | #Train | #Dev. | #Test | #Label | Task |
| --- | --- | --- | --- | --- | --- |
| WNLI (2012) | 634 | 71 | 146 | 2 | NLI |
| RTE (2018) | 2500 | 276 | 3000 | 2 | NLI |
| CoLA (2019) | 8500 | 1000 | 1000 | 2 | ACC |
| SST-2 (2013) | 67000 | 872 | 1800 | 2 | SEN |
| STSB (2017) | 7000 | 1500 | 1400 | 1 | SIM |
+ +Table 2: Statistics of five natural language understanding datasets. + +all tasks. Then, we periodically calculate the interference degree of the current task groups to activate the adapter differentiation operation when the interference degree is greater than 0. Once adapter differentiation starts, the change to the task correlation map is permanent. + +# 3.2 Avoiding Over-Differentiation + +So far, we have introduced the basic adapter differentiation method for learning task correlation. However, in practice, we find a problem we call over-differentiation: the basic adapter differentiation method has an unstable training process, in which every update of the task correlation map is irreversible. At the beginning of the training process, the shared adapter parameters are fragile and the inter-task gradients can be strongly biased on the held-out validation data. Thus, the adapter differentiation operation needs to be performed cautiously. In our proposed AdapterShare, we add another line of defense before activating the differentiation: we have to make sure that the inter-task gradient is trustworthy. As shown in Figure 2, each inter-task gradient is accumulated from intra-task gradients, while the intra-task gradients vary within a task. + +To alleviate this issue, we randomly split all the intra-task gradients into two groups and calculate the accumulated intra-task gradients of these two groups: $\overline{\mathbf{g}}_{t_i,0}^l$ and $\overline{\mathbf{g}}_{t_i,1}^l$ . Then, we use their cosine
| DU Tasks (T5) | ST | MT | AdapterFusion | AdapterShare |
| --- | --- | --- | --- | --- |
| SAMSUM (R-L) | 48.80 | 47.78 | 47.36 | 49.12 |
| TASK (BLEU) | 88.45 | 89.54 | 89.92 | 90.20 |
| BANK77 (ACC.) | 91.58 | 89.25 | 91.10 | 93.15 |
| REST8K (F1) | 97.28 | 96.41 | 95.93 | 97.58 |
| WOZ2.0 (JGA) | 91.25 | 90.70 | 89.12 | 92.89 |
| OVERALL | 83.47 | 82.74 | 82.69 | 84.59 |
+ +Table 3: Results on five dialogue understanding tasks with the backbone T5. + +
| NLU Tasks (BERT) | ST | MT | AdapterFusion | AdapterShare |
| --- | --- | --- | --- | --- |
| WNLI (ACC.) | 56.34 | 61.97 | 56.33 | 61.97 |
| RTE (ACC.) | 66.06 | 77.61 | 70.75 | 77.62 |
| CoLA (MCC.) | 58.02 | 59.06 | 60.23 | 60.64 |
| SST-2 (ACC.) | 93.12 | 92.66 | 93.12 | 92.77 |
| STSB (Spearman) | 88.78 | 89.28 | 89.88 | 88.96 |
| OVERALL | 72.46 | 76.12 | 74.06 | 76.39 |
+ +Table 4: Results on five natural language understanding tasks with the backbone BERT. + +similarity as the consistency of the inter-task gradient, calculated as: + +$$ +\mathcal {C} \left(\Phi_ {i} ^ {l}\right) = \frac {\overline {{\mathbf {g}}} _ {t _ {i} , 0} ^ {l} \cdot \overline {{\mathbf {g}}} _ {t _ {i} , 1} ^ {l}}{\| \overline {{\mathbf {g}}} _ {t _ {i} , 0} ^ {l} \| * \| \overline {{\mathbf {g}}} _ {t _ {i} , 1} ^ {l} \|}. \tag {5} +$$ + +The adapter differentiation on a task group can be activated only when all tasks in this task group have consistency values greater than the threshold $\alpha$ . The inter-task gradient of task $t_i$ is equal to the sum of the two accumulated intra-task gradients, formalized as: + +$$ +\bar {\mathbf {g}} _ {t _ {i}} ^ {l} = \bar {\mathbf {g}} _ {t _ {i}, 0} ^ {l} + \bar {\mathbf {g}} _ {t _ {i}, 1} ^ {l}. \tag {6} +$$ + +To distinguish it from the basic adapter differentiation method, we name the improved method robust adapter differentiation. The details of task correlation learning are shown in Algorithm 1. + +# 4 Experiments + +# 4.1 Datasets + +We evaluate our proposed AdapterShare on five dialog understanding (DU) datasets (shown in Table 1) and five natural language understanding (NLU) datasets (shown in Table 2). There are five different dialog understanding tasks in the DU datasets: DS, DC, ID, SF and DST represent dialogue summary, dialogue completion, intent detection, slot filling and dialogue state tracking, respectively. The five NLU + +datasets are chosen from the GLUE benchmark, spanning four different NLU tasks: NLI, ACC, SEN and SIM indicate natural language inference, acceptability, sentiment and similarity, respectively. + +# 4.2 Experimental Setup + +To investigate the proposed AdapterShare training method, we compare it with ST, MT and AdapterFusion. ST trains a separate adapter for each target task. MT trains the adapters on all the target tasks (Stickland and Murray, 2019).
AdapterFusion fuses the separately trained ST adapters on the target task with an attention mechanism. + +As described in Su et al. (2022) and Chen et al. (2022), the dialogue understanding tasks can be formulated as a unified sequence-to-sequence generation task. For the five DU tasks, we leverage the T5-base model (Raffel et al., 2020) as the backbone of the generation model. For the five NLU tasks, we implement all the experiments based on the released code by Liu et al. (2019). The backbone for the NLU tasks is BERT-large (Kenton and Toutanova, 2019). The adapters are implemented based on AdapterHub (Pfeiffer et al., 2020), and the pre-trained language models are inherited from the HuggingFace library (Wolf et al., 2019). We set the threshold of intra-task consistency $\alpha$ to $0.707\left(\cos \left(\pi /4\right)\right)$ . The learning rate is 1e-5. We conduct all the experiments on a V100 GPU with 16GB memory. For all metrics, higher is better. + +![](images/4323fb22b859d0ecf6db42f39a14ef486c60be38022e8b38032e402898c1e13d.jpg) +Figure 3: Differentiated adapters on 24 transformer layers of T5. The x-axis represents the task name; the y-axis represents the number of shared tasks. + +# 4.3 Results + +The proposed AdapterShare adopts a robust adapter differentiation method to learn task correlation. As shown in Table 3, the proposed AdapterShare achieves the best performance among all compared methods. Compared with the single-task method, AdapterFusion cannot obtain any performance gain in the encoder-decoder setup. In the encoder-only setting, AdapterFusion achieves the best performance on two of the five tasks, as shown in Table 4. Compared with the single-task method, it obtains clear improvements, which is consistent with the original conclusion (Pfeiffer et al., 2021). However, even in the encoder-only setup, our proposed AdapterShare still obtains the best performance on three of the five tasks and the best overall score.
The MT method shares all the parameters among all the tasks. On the dialog understanding tasks, the overall score of ST is better than that of MT, which indicates that some tasks are hurt by other tasks. The final results on the DU tasks further indicate that our proposed AdapterShare, which learns the task correlation map, is more effective than independent training (ST) and complete sharing (MT). The final differentiation architecture on T5 is shown in Figure 3. Four shared tasks means that all five tasks share the adapter with each other in the corresponding layer. We can see that adapter differentiation happens only on the T5 decoder side, while all the adapters on the encoder are shared. This phenomenon is interesting. The inputs of all DU tasks are the dialogue context, and the encoder module, as the representation function, is used to represent the dialogue context. Compared with the encoder, the decoder needs to solve different DU tasks, whose outputs are very different. Various DU tasks need to pay attention to different areas of the dialogue context. For example, the DST task is more inclined to obtain the entity information mentioned by the user, while intent detection pays more attention to user actions. + +We also conduct an ablation study to compare the robust adapter differentiation method with the basic differentiation method on the dialog understanding tasks. The performance curves on the development datasets are shown in Appendix A. They show that the training process of the robust adapter differentiation method is more stable than that of the basic method. The metrics of the robust method on the DU tasks are also higher than those of the basic differentiation method. + +# 5 Conclusion + +In this paper, we propose a robust adapter differentiation method to automatically learn task correlation in the multitask learning setting.
On both encoder-decoder and encoder-only PLMs, our proposed method achieves clear performance gains compared to separate training, complete sharing and AdapterFusion. In future work, we will apply our method to domain transfer, which is a more general scenario than multitask learning. + +# Limitations + +There are two main limitations in this paper. The first concerns the scale of the multitask setting: the experiments cover five tasks each in the dialogue understanding and natural language understanding areas, and it is unclear whether the proposed method works in a large-scale task learning setup. The second is the implicit assumption in our proposed method that the effect between two tasks is mutual, i.e., if one task benefits (or hurts) another, the reverse also holds. There is currently no evidence for the validity of this assumption. We leave these explorations for future work. + +# Ethical Considerations + +As our adapter differentiation methods are validated on existing datasets, we follow the original copyright statements of the 10 datasets. All claims in this paper are based on the experimental results. No demographic or identity characteristics information is used in this paper. + +# References + +Edwin V Bonilla, Kian Chai, and Christopher Williams. 2007. Multi-task gaussian process prediction. Advances in neural information processing systems, 20. +Inigo Casanueva, Tadas Temcinas, Daniela Gerz, Matthew Henderson, and Ivan Vulic. 2020. Efficient intent detection with dual sentence encoders. ACL 2020, page 38. +Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055. +Zhi Chen, Lu Chen, Bei Chen, Libo Qin, Yuncong Liu, Su Zhu, Jian-Guang Lou, and Kai Yu. 2022. UniDU: Towards a unified generative dialogue understanding framework.
In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 442-455, Edinburgh, UK. Association for Computational Linguistics. +Samuel Coope, Tyler Farghly, Daniela Gerz, Ivan Vulic, and Matthew Henderson. 2020. Span-convert: Few-shot span extraction for dialog with pretrained conversational representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 107-121. +Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. Samsum corpus: A human-annotated dialogue dataset for abstractive summarization. EMNLP-IJCNLP 2019, page 70. +Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799. PMLR. +Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171-4186. +Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thirteenth international conference on the principles of knowledge representation and reasoning. +Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496. +Jonas Pfeiffer, Aishwarya Kamath, Andreas Rückle, Kyunghyun Cho, and Iryna Gurevych. 2021. Adapterfusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487-503. + +Jonas Pfeiffer, Andreas Rückle, Clifton Poth, Aishwarya Kamath, Ivan Vulic, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020. 
Adapterhub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020): Systems Demonstrations, pages 46-54, Online. Association for Computational Linguistics. +Jun Quan, Deyi Xiong, Bonnie Webber, and Changjian Hu. 2019. Gecor: An end-to-end generative ellipsis and co-reference resolution model for task-oriented dialogue. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4547-4557. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1-67. +Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. Advances in neural information processing systems, 30. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642. +Asa Cooper Stickland and Iain Murray. 2019. Bert and pals: Projected attention layers for efficient adaptation in multi-task learning. In International Conference on Machine Learning, pages 5986-5995. PMLR. +Yixuan Su, Lei Shu, Elman Mansimov, Arshit Gupta, Deng Cai, Yi-An Lai, and Yi Zhang. 2022. Multi-task pre-training for plug-and-play task-oriented dialogue system. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4661-4676. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. 
Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. +Qian Wang and Jiajun Zhang. 2021. Parameter differentiation based multilingual neural machine translation. arXiv preprint arXiv:2112.13619. +Alex Warstadt, Amanpreet Singh, and Samuel Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641. + +Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gasic, Lina M Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 438-449. + +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771. + +# A Ablation Study on DU Tasks + +![](images/4732675ce9d63f89358256c0ca91cea7ff079ca32cee08288b46163a727da4eb.jpg) + +![](images/6417e87ac2f42a61a52bf49cb0b4e68140fc6bae48cefac08ac8f4e59ab5d223.jpg) +(a) Basic adapter differentiation. +(b) Robust adapter differentiation. +Figure 4: The performance curves on five dialogue understanding tasks with (a) basic adapter differentiation and (b) robust adapter differentiation methods. 
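The consistency gate of Eqs. 5-6, which separates the robust differentiation ablated here from the basic method, can be sketched as below. The 50/50 random split, the function names, and the toy gradients are our assumptions for illustration:

```python
import numpy as np

def consistency(g0, g1):
    """Cosine similarity of the two half-split intra-task gradients (Eq. 5)."""
    return float(g0 @ g1 / (np.linalg.norm(g0) * np.linalg.norm(g1)))

def trusted_gradient(per_sample_grads, alpha=np.cos(np.pi / 4), seed=0):
    """Accumulate the inter-task gradient (Eq. 6) only when the two random
    halves of the held-out gradients agree, i.e. consistency > alpha."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(per_sample_grads))
    half = len(idx) // 2
    g0 = per_sample_grads[idx[:half]].sum(axis=0)
    g1 = per_sample_grads[idx[half:]].sum(axis=0)
    if consistency(g0, g1) <= alpha:
        return None                 # gradient not trusted; defer differentiation
    return g0 + g1                  # Eq. 6: sum of the two accumulated halves

# Four per-sample gradients that roughly agree, so the gate opens.
per_sample = np.array([[1.0, 0.1], [0.9, -0.1], [1.1, 0.0], [0.8, 0.2]])
g = trusted_gradient(per_sample)
```

When the two halves disagree (consistency below the threshold of 0.707 used in the experiments), no split is attempted, which is what stabilizes the training curves in Figure 4(b).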
\ No newline at end of file diff --git a/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/images.zip b/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..63313357dbc6f60edca9c624f79ef1fc074489e7 --- /dev/null +++ b/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e90275414bde31c50b3a2ce1e64903dd6c418bf444f45d7ac999de67a4d7cb8 +size 319746 diff --git a/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/layout.json b/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..196306fd678b354dc76ac4e7cb9f44818eb942d2 --- /dev/null +++ b/adaptersharetaskcorrelationmodelingwithadapterdifferentiation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8a1962f6771113064b10b4f353d2ff172a7d2fc259c3a6557df0110dc2033c7c +size 241544 diff --git a/adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_content_list.json b/adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ce5fca78cb2a0a46c5b1cabde0160ce1ff6d50d2 --- /dev/null +++ b/adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0bacc12071113ef6012bfeb660c887649330e3b8af0436f0cdf4da593a38a34 +size 97744 diff --git a/adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_model.json b/adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..21f61cdfcb9a583ef891545cd65f57cd0a8fefc4 --- /dev/null +++ b/adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c2f753741902f1c91939ee4cef1269252a11526ecfca2ad5a98dc23ff429962 +size 117837 diff --git a/adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_origin.pdf b/adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8978d8acb68de06951709410f971c098b72783ff --- /dev/null +++ b/adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13f1f1356ef11824b696daaeb37baf750a93ab275b366255cb8960c37c26a388 +size 517148 diff --git a/adaptingalanguagemodelwhilepreservingitsgeneralknowledge/full.md b/adaptingalanguagemodelwhilepreservingitsgeneralknowledge/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0152c4b67e7656f1d71b2c030dc2854f2b12ef42 --- /dev/null +++ b/adaptingalanguagemodelwhilepreservingitsgeneralknowledge/full.md @@ -0,0 +1,385 @@ +# Adapting a Language Model While Preserving its General Knowledge + +Zixuan Ke $^{1}$ , Yijia Shao $^{2}$ , Haowei Lin $^{2}$ , Hu Xu $^{3}$ , Lei Shu $^{1*}$ and Bing Liu $^{1}$ + +$^{1}$ Department of Computer Science, University of Illinois at Chicago + +$^{2}$ Wangxuan Institute of Computer Technology, Peking University + +Meta AI + +$^{1}\{zke4, liub\} @uic.edu$ + +$^{2}$ shaoyj, linhaowei}@pku.edu.cn + +$^{3}$ huxu@fb.com + +# Abstract + +Domain-adaptive pre-training (or DA-training for short), also known as post-training, aims to train a pre-trained general-purpose language model (LM) using an unlabeled corpus of a particular domain to adapt the LM so that 
end-tasks in the domain achieve improved performance. However, existing DA-training methods are in some sense blind as they do not explicitly identify what knowledge in the LM should be preserved and what should be changed by the domain corpus. This paper shows that the existing methods are suboptimal and proposes a novel method to perform a more informed adaptation of the knowledge in the LM by (1) soft-masking the attention heads based on their importance to best preserve the general knowledge in the LM and (2) contrasting the representations of the general knowledge and the full (both general and domain) knowledge to learn an integrated representation with both general and domain-specific knowledge. Experimental results demonstrate the effectiveness of the proposed approach.

# 1 Introduction

Pre-trained general-purpose language models (LMs) like BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and GPT-3 (Brown et al., 2020) have become a standard component in almost all NLP applications. Researchers have also found that domain-adaptive pre-training (or DA-training for short) using an unlabeled corpus in a specific domain to adapt an LM can further improve the end-task performance in the domain (Gururangan et al., 2020; Xu et al., 2019a,b; Sun et al., 2019; Alsentzer et al., 2019). Note that domain-adaptive pre-training is also called post-training (Xu et al., 2019a).

Existing DA-training methods simply apply the same pre-training objective, i.e., the masked language model (MLM) loss, to further train an LM using a domain corpus. These methods are sub-optimal because they do not explicitly identify what should be preserved and what should be updated in the LM by the domain corpus.

This paper argues that a good DA-training method has two needs.
On the one hand, the general language knowledge learned in the LM should be preserved as much as possible because the target domain data is typically not large enough to learn the general knowledge well. For example, some words and their contexts may appear infrequently in a particular domain. The knowledge about them cannot be learned accurately based on the domain data alone. When these words and contexts appear in an end-task, the system will have difficulties. Thus, we need to rely on the knowledge about them in the LM. Since existing DA-training updates the LM with little guidance, such useful general knowledge may be corrupted. On the other hand, due to polysemy (the same word having different meanings in different domains) and the fact that different domains also have their special word usages and contexts, the LM should be specialized or adapted to the target domain. A good DA-training method should balance these two needs to adapt the LM to the target domain with minimal corruption of the good general knowledge in the LM.

This paper proposes a novel technique to enable a more informed adaptation to (1) preserve the general knowledge in the LM as much as possible, and (2) update the LM to incorporate the domain-specific knowledge of the target domain as needed. The focus of the existing DA-training research has been on (2). As we argued above, (1) is also important as focusing only on (2) may destroy some useful general knowledge and produce sub-optimal results for end-tasks. To achieve (1), the system should constrain the gradient update of each attention head based on its importance to the general knowledge so that the general knowledge in the LM can be preserved as much as possible.
With (1), (2) will be able to change the part of the general knowledge that needs to be updated to adapt the LM to suit the target domain.

In this paper, we propose a novel model called DGA (DA-training with General knowledge preservation and LM Adaptation) for this purpose. The key idea of the proposed method is to preserve the general language knowledge in the LM while adapting the LM to a specific domain. However, it is not obvious how this can be done, i.e., how to find those parameters that are important for the general knowledge and how to protect them. This paper proposes a novel proxy-based method to achieve the objectives. It works as follows. DGA first estimates the importance of each attention head in the LM via the newly proposed proxy KL-divergence loss (Sec. 3.1). This importance score reflects how important each attention head is to the general knowledge. Based on the importance scores, it performs two key functions: The first function uses the scores to soft-mask (rather than binary-mask or completely block) the gradient update to prevent important general knowledge in the LM from being unnecessarily corrupted. This is related to the pruning of unimportant attention heads (Michel et al., 2019). However, pruning is not directly applicable to DA-training as we will show in Sec. 2. The proposed soft-masking constrains only the backward gradient flow in training. It is not necessary to soft-mask the forward pass in either training or inference. This is important because using the knowledge in the full network encourages maximal integration of the pre-trained general knowledge and the target domain-specific knowledge. The second function contrasts the representation of the general knowledge in the LM and the full (including both the general and the domain-specific) knowledge to learn an integrated representation (Sec. 3.2).

In summary, this paper makes two key contributions.

(1).
It proposes the idea of informed adaptation to integrate the specialized knowledge in the target domain into the LM with minimal corruption to the useful general knowledge in the original LM.

(2). It proposes a new model DGA with two novel functions to enable better DA-training. DGA estimates the attention head importance to protect the important general knowledge in the LM and integrates the specialized knowledge in the target domain into the LM through contrasting the general and the full knowledge.

To the best of our knowledge, none of these has been reported in the literature before.

Extensive experiments have been conducted on 6 different domains and against 10 baselines to demonstrate the effectiveness of the proposed DGA.

# 2 Related Work

Domain-adaptive pre-training (DA-training). Researchers have applied DA-training to many domains, e.g., reviews (Xu et al., 2019a,b), biomedical text (Lee et al., 2020), news and papers (Gururangan et al., 2020), and social media (Chakrabarty et al., 2019). However, they all use the same masked language model (MLM) loss. We argue that this is sub-optimal and that it is also important to preserve the general knowledge in the LM as much as possible and integrate it with the target domain knowledge.

Network pruning as importance computation. It is known that many parameters in a neural network are redundant and can be pruned (Li et al., 2021; Lai et al., 2021). This has also been shown for pre-trained Transformers (Chen et al., 2020a; Lin et al., 2020; Gao et al., 2021b; Michel et al., 2019; Voita et al., 2019). A popular pruning method is to discard the parameters with small absolute values (Han et al., 2015; Guo et al., 2016). Other methods prune the network at a higher level. In a Transformer-based model, these include pruning the attention heads (Michel et al., 2019; Voita et al., 2019; McCarley et al., 2019) and pruning sub-layers in a standard Transformer layer (Fan et al., 2020; Sajjad et al., 2020).
However, the above methods are not directly applicable to us as we need to compute the head importance for the LM using unlabeled domain data, while the above approaches are all for supervised end-tasks. We propose to use a proxy KL-divergence loss for our purpose. Note that it is possible to prune other sub-layers in the Transformer. However, as shown in Sec. 4.3, estimating the importance for other layers does not improve the performance.

Contrastive learning. Contrastive learning (Chen et al., 2020b; He et al., 2020) can learn good representations by maximizing the similarity of positive pairs and minimizing that of negative pairs:

$$
\mathcal{L}_{\text{contrast}} = -\log \frac{e^{\operatorname{sim}(q_i, q_i^+)/\tau}}{\sum_{j=1}^{N} e^{\operatorname{sim}(q_i, q_j^+)/\tau}}, \tag{1}
$$

where $N$ is the batch size, $\tau$ is a temperature parameter, $\mathrm{sim}(\cdot)$ is a similarity metric, and $q_{i}$ and $q_{i}^{+}$ are representations for positive pairs $x_{i}$ and $x_{i}^{+}$ (typically, $x_{i}^{+}$ is an augmented sample of $x_{i}$, e.g., generated via cropping, deletion or synonym replacement (Gao et al., 2021a)). In the unsupervised contrastive loss, the negative samples are the other samples in the batch, indicated in the denominator.

We mainly use contrastive loss to contrast the representations of the important general knowledge in the original LM and the full knowledge (both the general and domain-specific knowledge) to achieve a good integration of the general knowledge and the domain-specific knowledge.

# 3 Proposed DGA System

As discussed earlier, DGA goes beyond the MLM loss to perform two more functions: (1) preserving the important general knowledge in the LM by soft-masking the attention heads based on their importance. This helps avoid potential corruption of the general knowledge in the LM during DA-training (Sec. 3.1).
However, the challenge is how to identify the general knowledge in the LM and how to protect it. We will propose a method to do that. (2) encouraging the model to learn integrated representations of the target domain and the general knowledge in the LM (Sec. 3.2). It is also not obvious how this can be done. We propose a contrastive learning based method to do it. Figure 1 gives an overview of DGA.

# 3.1 Preserving General Knowledge by Soft-Masking Attention Heads

Multi-head attention. Multi-head attention is arguably the most important component in the Transformer model (Vaswani et al., 2017). We omit details of other parts and refer the reader to the original paper. Formally, let $\boldsymbol{x} = x^{(1)},\dots,x^{(T)}$ be a sequence of $T$ real vectors where $x^{(t)}\in \mathbb{R}^d$ and let $q\in \mathbb{R}^d$ be a query vector. The attention mechanism is defined as

$$
\operatorname{att}(\boldsymbol{x}, q) = W_o \sum_{t=1}^{T} \alpha^{(t)}(q)\, W_v x^{(t)}, \tag{2}
$$

![](images/4ea5ada12f80cc747591c8272d87dc41e0dd168377db9d707b2877ee6655be41.jpg)
Figure 1: Illustration of DGA. (A) shows the importance computation. This is done by adding a gate vector $\pmb{g}_l$ that multiplies the multi-head attention (Eq. 5) and averaging its training gradients (Eq. 6). (B) shows DGA training. In the backward pass, attention heads are soft-masked based on their importance $\pmb{I}$ (Eqs. 9 and 10) to try to preserve the general knowledge in the LM as much as possible. In the forward pass, the added gate vector is removed except for feature learning in the contrastive loss. The contrastive loss is computed by contrasting the general knowledge with the importance applied ($\pmb{o}^{\mathrm{gen}}$ in Eq. 12) and the full knowledge without the importance applied ($\pmb{o}^{\mathrm{full}}$ in Eq. 14). The final objective of DGA consists of the MLM loss and the contrastive loss.
Note that we omit the details of other parts of the Transformer and only focus on the multi-head attention mechanism.

![](images/14a436a5aa85c043d153a670afa121a57e1091ceb41a972d9c594726c35c20b5.jpg)

where

$$
\alpha^{(t)}(q) = \operatorname{softmax}\left(\frac{q^T W_q^T W_k x^{(t)}}{\sqrt{d}}\right). \tag{3}
$$

The projection matrices $W_{o}, W_{v}, W_{q}, W_{k} \in \mathbb{R}^{d \times d}$ are learnable parameters. The query vector is from the same sequence as $\pmb{x}$ in self-attention. A Transformer contains $L$ identical layers. For layer $l$, $H_{l}$ different attention heads are applied in parallel. Simply put, multi-head attention (mhatt) is the simultaneous application of multiple attention heads in a single Transformer architecture; their outputs are summed to obtain the multi-head attention,

$$
\operatorname{mhatt}_l(\boldsymbol{x}, q) = \sum_{h=1}^{H_l} \operatorname{att}_{lh}(\boldsymbol{x}, q), \tag{4}
$$

where $h$ indicates the $h^{th}$ attention head. Note that the input $x$ is different in each layer since the input of a given layer is the output of the previous layer. To ease the notation, we use the input $x$ for all layers.

Head importance. Researchers have found that not all attention heads are important (Michel et al., 2019). We introduce a gate vector, $\pmb{g}_l$, where each cell is a gate variable, $g_{lh}$, to the attention head summation for detecting the importance of attention heads. The resulting importance scores are used to soft-mask the heads in DA-training.

$$
\operatorname{gmhatt}_l(\boldsymbol{x}, q) = \sum_{h=1}^{H_l} g_{lh} \otimes \operatorname{att}_{lh}(\boldsymbol{x}, q), \tag{5}
$$

where $\otimes$ is the element-wise multiplication. A gradient-based head importance detection method is proposed in (Michel et al., 2019).
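The attention and gating computations in Eqs. 2-5 can be sketched in numpy with toy dimensions; all sizes, names, and the randomly initialized projections below are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, H = 4, 3, 2  # hidden size, sequence length, number of heads (toy values)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def att(x, q, Wq, Wk, Wv, Wo):
    # One attention head (Eqs. 2-3): score each position, then mix the values.
    scores = np.array([q @ Wq.T @ Wk @ x[t] / np.sqrt(d) for t in range(T)])
    alpha = softmax(scores)
    return Wo @ sum(alpha[t] * (Wv @ x[t]) for t in range(T))

# Randomly initialized projections, one set per head (illustrative only).
heads = [{k: rng.standard_normal((d, d)) for k in ("Wq", "Wk", "Wv", "Wo")}
         for _ in range(H)]
x = rng.standard_normal((T, d))  # input sequence
q = rng.standard_normal(d)       # query vector

def gmhatt(g):
    # Gated multi-head attention (Eq. 5): per-head gate times head output.
    return sum(g[h] * att(x, q, **heads[h]) for h in range(H))

mhatt = gmhatt(np.ones(H))  # all gates open recovers plain mhatt (Eq. 4)
```

Because the gated sum is linear in the gates, zeroing a gate exactly removes that head's contribution, which is what makes the gradient of $g_{lh}$ a sensible sensitivity measure.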
Given a dataset $D = \{(\pmb{y}_m, \pmb{x}_m)\}_{m=1}^M$ of $M$ samples ($\pmb{y}_m$ is the label of $\pmb{x}_m$, as Michel et al. (2019) worked on supervised learning), the importance of a head is estimated with a gradient-based proxy score

$$
I_{lh} = \frac{1}{M} \sum_{m=1}^{M} \left|\nabla_{g_{lh}}\right|, \tag{6}
$$

where $\nabla_{g_{lh}}$ is the gradient of the gate variable $g_{lh}$,

$$
\nabla_{g_{lh}} = \frac{\partial \mathcal{L}_{\text{impt}}\left(\boldsymbol{y}_m, \boldsymbol{x}_m\right)}{\partial g_{lh}}, \tag{7}
$$

where $\mathcal{L}_{\mathrm{impt}}$ is a task-specific/domain-specific loss function. The gradient can be used as the importance score because changing $g_{lh}$ is liable to have a large effect on the model if $I_{lh}$ has a high value.

Although Eq. 6 offers a way to compute the importance of attention heads w.r.t. a given loss $\mathcal{L}_{\mathrm{impt}}$, we are unable to apply it directly: if we use the domain data at hand and the MLM loss as $\mathcal{L}_{\mathrm{impt}}$, $\nabla_{g_{lh}}$ only indicates the importance score for domain-specific knowledge. However, our goal is to estimate the attention head importance for the general knowledge in the LM, which requires the data used in training the LM to compute $\mathcal{L}_{\mathrm{impt}}$. In practice, such data is not accessible to users of the LM. Further, labels are needed in Eq. 6, but our domain corpus is unlabeled in DA-training. To address these issues, we propose to compute a proxy KL-divergence loss for $\mathcal{L}_{\mathrm{impt}}$.

Proxy KL-divergence loss. We need a proxy for $\mathcal{L}_{\mathrm{impt}}$ such that its gradient ($\nabla_{g_{lh}}$) can be used to compute head importance without using the LM's original pre-training data. We propose to use model robustness as the proxy, i.e., we try to detect heads that are important for the LM's robustness.
Its gradient, $\nabla_{g_{lh}}$, then indicates the robustness and thus the importance to the LM. Our rationale is as follows: if $I_{lh}$ (the average of $|\nabla_{g_{lh}}|$, see Eq. 6) has a high value, the head is important to the LM's robustness because changing it can cause the LM to change a great deal. It is thus an important head for the LM. In contrast, if $I_{lh}$ has a small value, it is a less important head for the LM.

To compute the robustness of the LM, we take a subset (a hyper-parameter) of the target domain data $\{x_{m}^{\mathrm{sub}}\}$ (no labels in DA-training), input $x_{m}^{\mathrm{sub}}$ twice to the LM, and compute the KL-divergence of the two resulting representations,

$$
\mathcal{L}_{\text{impt}} = \mathrm{KL}\left(f_1\left(\boldsymbol{x}_m^{\text{sub}}\right), f_2\left(\boldsymbol{x}_m^{\text{sub}}\right)\right), \tag{8}
$$

where $f_{1}$ and $f_{2}$ are the LM with different dropout masks. Note that we don't need to add any additional dropout to implement $f$ because independently sampled dropout masks are already applied inside the Transformer: in training a Transformer, dropout masks are placed on the fully-connected layers and the attention probabilities. Thus, simply feeding the same input to the Transformer twice yields two representations with different dropout masks. Since dropout is similar to adding noise, the difference between the two representations can be regarded as a measure of the robustness of the Transformer model. Figure 1 (A) shows how we compute the importance of each attention head using the gradient of the gate vector $g_{l}$.

Soft-masking attention heads in DA-training. Recall we want to preserve the general knowledge in the LM during DA-training using the head importance $I_{lh}$. Given the attention head $\mathrm{att}(x, q)$ and the DA-training loss $\mathcal{L}_{\mathrm{DA - train}}$ (typically the MLM loss; we also propose an additional loss in Sec.
3.2), we can "soft mask" its corresponding gradient ($\nabla_{\mathrm{att}_{lh}}$) using the head importance value $I_{lh}$,

$$
\nabla_{\mathrm{att}_{lh}}^{\prime} = \left(1 - I_{lh}^{\text{norm}}\right) \otimes \nabla_{\mathrm{att}_{lh}}, \tag{9}
$$

where $I_{lh}^{\mathrm{norm}}$ is obtained from $I_{lh}$ via normalization,

$$
I_{lh}^{\text{norm}} = \left|\operatorname{Tanh}\left(\operatorname{Normalize}\left(I_{lh}\right)\right)\right|. \tag{10}
$$

Normalize makes the $I_{lh}$ have a mean of 0 and a standard deviation of 1. The absolute value of Tanh ensures that $I_{lh}^{\mathrm{norm}}$ takes values in the interval [0, 1]. Eq. 9 constrains the gradient of the corresponding head $\mathrm{att}_{lh}(x,q)$ by element-wise multiplying the gradient by one minus the head importance. It is "soft-masking" because $I_{lh}^{\mathrm{norm}}$ is a real number in [0, 1] (instead of a binary value in {0, 1}), which gives the model the flexibility to adjust each attention head. This is useful because although some heads are important to the LM, they may conflict with the knowledge in the target domain and thus need adjusting. Also note that the soft-masks here affect only the backward pass and are not used in the forward pass (so that the forward pass can use the full network and encourage maximal integration of pre-trained general and domain-specific knowledge), except for feature learning using contrastive learning (see below). Figure 1 (B) shows that attention heads are soft-masked during training.

# 3.2 Contrasting General and Full Knowledge

We now present how to integrate the general knowledge in the LM and the domain-specific knowledge in the target domain by contrasting the general knowledge and the full knowledge (both general and domain-specific). We first introduce how we obtain such knowledge from the LM for the input $\mathbf{x}$, and then discuss how we contrast them.
Obtaining the general knowledge for the input sequence $\pmb{x}$ from the LM is done by combining the attention heads with their importance scores ($I_{lh}^{\mathrm{norm}}$ in Eq. 10) in the forward pass and extracting the resulting representation. The intuition is that since the importance scores show how important each attention head is to the general knowledge, the resulting representation reflects the main general knowledge used by $\pmb{x}$. Formally, we plug $I_{lh}^{\mathrm{norm}}$ (the soft-masks) in as the gate variable $g_{lh}$ in Eq. 5,

$$
\operatorname{gmhatt}_l^{\text{gen}}(\boldsymbol{x}, q) = \sum_{h=1}^{H_l} I_{lh}^{\text{norm}} \otimes \operatorname{att}_{lh}(\boldsymbol{x}, q). \tag{11}
$$

Given the attention heads for the general knowledge, we can plug them into the whole Transformer to obtain the final general knowledge (taking the average of each token's output in the input sequence),

$$
\boldsymbol{o}^{\text{gen}} = \operatorname{Transformer}\left(\operatorname{gmhatt}^{\text{gen}}(\boldsymbol{x}, q)\right). \tag{12}
$$

(See $o^{\mathrm{gen}}$ also in Figure 1 (B).)

Obtaining the full (both general and domain-specific) knowledge in $x$ is similar. The only difference is that we extract the representation of $x$ without applying the importance (soft-masks) on the attention heads in the forward pass,

$$
\operatorname{gmhatt}_l^{\text{full}}(\boldsymbol{x}, q) = \sum_{h=1}^{H_l} \operatorname{att}_{lh}(\boldsymbol{x}, q). \tag{13}
$$

Similarly, we can plug it into the Transformer,

$$
\boldsymbol{o}^{\text{full}} = \operatorname{Transformer}\left(\operatorname{gmhatt}^{\text{full}}(\boldsymbol{x}, q)\right). \tag{14}
$$

(See $o^{\mathrm{full}}$ also in Figure 1 (B).)
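As a toy illustration of the only difference between Eqs. 11 and 13, the sketch below gates stand-in head outputs with a stand-in $I_{lh}^{\mathrm{norm}}$; all values are illustrative, and the paper additionally passes these through the rest of the Transformer and averages over tokens (Eqs. 12 and 14), which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)
H, d = 4, 8  # number of heads and hidden size (toy values)

# Stand-ins: per-head outputs att_lh(x, q) and normalized head importance
# I_norm from Eq. 10 (values in [0, 1]); both are illustrative, not learned.
head_out = rng.standard_normal((H, d))
I_norm = np.array([0.9, 0.1, 0.7, 0.0])

def gated_sum(gates):
    # Eqs. 11/13: gate each head's output, then sum over the heads.
    return (gates[:, None] * head_out).sum(axis=0)

gen_repr = gated_sum(I_norm)       # Eq. 11: gated by importance (o_gen path)
full_repr = gated_sum(np.ones(H))  # Eq. 13: ungated (o_full path)
```

Note how a head with zero importance contributes nothing to the general representation, while every head contributes to the full one.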
Note that it is possible to use $(1 - I_{lh}^{\mathrm{norm}})$ as the importance of the domain-specific knowledge and contrast it with the general knowledge. However, this produces poorer results (see Table 3), as explained in footnote 4.

Contrasting general and full knowledge. It is known that contrastive learning helps learn a good isotropic representation that benefits downstream tasks, with the help of positive and negative instances. We contrast the general ($o^{\mathrm{gen}}$) and full ($o^{\mathrm{full}}$) representations (as negative and positive instances, respectively) for the same input $x$ to make them different, which encourages the learning of domain-specific knowledge in $o^{\mathrm{full}}$ that is not already in the general knowledge and yet related to and integrated with the general knowledge ($o^{\mathrm{gen}}$) of the input.

We construct contrastive instances as follows: for an input $\pmb{x}_m$, three contrastive instances are produced. The anchor $\pmb{o}_m$ and the positive instance $o_m^+$ are both full knowledge from Eq. 14, obtained with two independently sampled dropout masks in the Transformer (recall that this can be achieved by inputting $x_m$ twice; see Sec. 3.1). We regard $o_m^+$ and $o_m$ as positive instances because dropout noise has been shown to produce good positive instances for improving alignment in training sentence embeddings (Gao et al., 2021a). The negative instance $o_m^-$ is the general knowledge for $x_m$ from the LM obtained via Eq. 12. With $o_m$, $o_m^+$, and $o_m^-$, our contrastive loss is ($\mathrm{sim}(\cdot)$ is the cosine similarity)

$$
\mathcal{L}_{\text{contrast}} = -\log \frac{e^{\operatorname{sim}\left(\boldsymbol{o}_m, \boldsymbol{o}_m^+\right)/\tau}}{\sum_{j=1}^{N}\left(e^{\operatorname{sim}\left(\boldsymbol{o}_m, \boldsymbol{o}_j^+\right)/\tau} + e^{\operatorname{sim}\left(\boldsymbol{o}_m, \boldsymbol{o}_j^-\right)/\tau}\right)}. \tag{15}
$$

Compared to Eq.
1, the second term is added in the denominator, i.e., the general knowledge representations serve as additional negative samples/instances. Figure 1 (B) shows a red arrow pointing from $o^{\mathrm{full}}$ to itself, indicating that the positive instances come from inputting the same sequence twice. The dashed red arrow pointing to $o^{\mathrm{gen}}$ indicates the negative instances contrasting the specialized and general knowledge.
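A minimal numpy sketch of Eq. 15, with random vectors standing in for the batch of anchors $o_m$, dropout-based positives $o_m^+$, and general-knowledge negatives $o_m^-$ (batch size, dimension, and all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, tau = 4, 16, 0.05  # batch size, dimension, temperature (illustrative)

def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Stand-ins for one batch: o (anchor, full knowledge), o_pos (second dropout
# pass of the full knowledge, Eq. 14), o_neg (general knowledge, Eq. 12).
o = rng.standard_normal((N, d))
o_pos = o + 0.1 * rng.standard_normal((N, d))
o_neg = rng.standard_normal((N, d))

def contrast_loss(m):
    # Eq. 15: own positive in the numerator; all positives AND all general-
    # knowledge representations in the denominator as in-batch negatives.
    num = np.exp(cos_sim(o[m], o_pos[m]) / tau)
    den = sum(np.exp(cos_sim(o[m], o_pos[j]) / tau) +
              np.exp(cos_sim(o[m], o_neg[j]) / tau) for j in range(N))
    return -np.log(num / den)

loss = np.mean([contrast_loss(m) for m in range(N)])
```

Because the numerator is one of the terms in the denominator, the loss is always positive; minimizing it pulls the two dropout views of $o^{\mathrm{full}}$ together while pushing $o^{\mathrm{full}}$ away from $o^{\mathrm{gen}}$.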
| Source | Unlabeled Dataset | Size | End-Task Dataset | End Task | #Training | #Testing | #Classes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Reviews | Yelp Restaurant | 758MB | Restaurant | Aspect Sentiment Classification (ASC) | 3,452 | 1,120 | 3 |
| Reviews | Amazon Phone | 724MB | Phone | Aspect Sentiment Classification (ASC) | 239 | 553 | 2 |
| Reviews | Amazon Camera | 319MB | Camera | Aspect Sentiment Classification (ASC) | 230 | 626 | 2 |
| Academic Papers | ACL Papers | 867MB | ACL | Citation Intent Classification | 1,520 | 421 | 6 |
| Academic Papers | AI Papers | 507MB | AI | Relation Classification | 2,260 | 2,388 | 7 |
| Academic Papers | PubMed Papers | 989MB | PubMed | Chemical-protein Interaction Prediction | 2,667 | 7,398 | 13 |
Table 1: Statistics for the unlabeled domain post-training datasets and the corresponding end-task supervised classification datasets (more detail of each task is given in Appendix A).

# 3.3 DGA Objectives

DGA is a pipelined model: first, a subset of the domain data is used to estimate the attention head importance ($I_{lh}$ in Sec. 3.1). Second, given the attention head importance, we compute the final domain-adaptive loss by combining the conventional masked language model (MLM) loss (including the proposed soft-masking for general knowledge preservation) and the proposed contrastive loss:

$$
\mathcal{L}_{\mathrm{DA\text{-}train}} = \mathcal{L}_{\mathrm{MLM}} + \lambda_1 \mathcal{L}_{\text{contrast}}, \tag{16}
$$

where $\lambda_{1}$ is a hyper-parameter that adjusts the impact of the added term.

# 4 Experiments

We follow the experiment setup in (Gururangan et al., 2020). RoBERTa (Liu et al., 2019) is used as the LM. In each experiment, we first DA-train the LM and then fine-tune it on the end-task. The final evaluation is based on the end-task results.

# 4.1 Datasets and Baselines

Datasets: Table 1 shows the statistics of the unlabeled domain datasets for DA-training and their corresponding end-task classification datasets. We use 6 unlabeled domain datasets: 3 of them are about reviews: Yelp Restaurant (Xu et al., 2019a), Amazon Phone (Ni et al., 2019), Amazon Camera (Ni et al., 2019); 3 of them are academic papers: ACL Papers (Lo et al., 2020), AI Papers (Lo et al., 2020), and PubMed Papers. Each unlabeled domain dataset has a corresponding end-task classification dataset: Restaurant (Xu et al., 2019a), Phone (Ding et al., 2008; Hu and Liu, 2004), Camera (Ding et al., 2008; Hu and Liu, 2004), ACL (ACL-ARC in Jurgens et al. (2018)), AI (SCIERC in Luan et al. (2018)), and PubMed (CHEMPROT in Kringelum et al. (2016)).

Baselines. We consider 10 baselines.

(1).
Non-DA-training (RoBERTa) (Liu et al., 2019) uses the original RoBERTa for the end-task fine-tuning without any DA-training.

(2). DA-training using masked language model loss (MLM) is the existing DA-training method. To our knowledge, existing DA-training systems are all based on the MLM loss.

(3). DA-training using adapter-tuning (MLM (Adapter)) adds adapter layers between the layers of the Transformer for DA-training. An adapter (Houlsby et al., 2019) has two fully connected layers and a skip connection. During DA-training, the Transformer is fixed and only the adapters are trained. The bottleneck (adapter) size is set to 64 (Houlsby et al., 2019). During end-task fine-tuning, both RoBERTa and the adapters are trainable for a fair comparison.

(4). DA-training using prompt-tuning (MLM (Prompt)) (Lester et al., 2021) adds a sequence of prompt tokens to the end of the original sequence. In DA-training, RoBERTa (the LM) is fixed and only the prompt tokens are trained. In end-task fine-tuning, both the LM and the trained prompt are trainable. We initialize 100 prompt tokens and set the learning rate of the prompt tokens to 0.3 in DA-training, following the setting in Lester et al. (2021).

(5). Knowledge distillation (MLM+KD) (Hinton et al., 2015) minimizes the representational deviation between the general knowledge in the LM and the specialized knowledge in DA-training. We compute the KL divergence between the representations (the output before the masked language model prediction head) of each word of the two models (LM and DA-trained) as the distillation loss.

(6). Adapted distillation through attention (MLM+AdaptedDeiT) is derived from DeiT (Touvron et al., 2021), a distillation method for the visual Transformer (ViT) (Dosovitskiy et al., 2020). We adapt DeiT to a text-based and unsupervised model by distilling the LM representation to the added distillation token and changing ViT to RoBERTa.
DA-training using sequence-level contrastive learning (MLM+SimCSE and MLM+InfoWord). SimCSE is a contrastive learning method for sentence embedding (Gao et al., 2021a). We use its unsupervised version, where positive samples are from the same input with different dropout masks and negative samples are the other instances in the same batch. InfoWord (Kong et al., 2020) is another contrastive learning method that contrasts the span-level local representation with the sequence-level global representation.

(9, 10). DA-training using token-aware contrastive learning (MLM+TaCL and MLM+TaCO). TaCL (Su et al., 2021) and TaCO (Fu et al., 2022) are two recent methods to improve BERT pre-training with a token-aware contrastive loss. We change the backbone to RoBERTa for a fair comparison.

# 4.2 Implementation Detail

Architecture. We adopt RoBERTa$_{\mathrm{BASE}}$ as our backbone LM (12 layers and 12 attention heads in each layer). A masked language model head is applied for DA-training. The end-task fine-tuning of RoBERTa follows the standard practice. For the three ASC tasks (see Table 1), we adopt the ASC formulation in (Xu et al., 2019a), where the aspect (e.g., "sound") and the review sentence (e.g., "The sound is great") are concatenated via $\langle /s \rangle$.

Hyperparameters. Unless otherwise stated, the same hyper-parameters are used in all experiments. The maximum input length is set to 164, which is long enough for all our datasets and end-tasks and only needs moderate computational resources. The Adam optimizer is used for both DA-training and end-task fine-tuning.

Domain-adaptive pre-training (DA-training). The learning rate is set to 1e-4 and the batch size is 256. We train 2.5K steps for each domain, roughly a full pass through the domain data, following (Gururangan et al., 2020; Xu et al., 2019a). The subset of data $\{\pmb{x}_m^{\mathrm{sub}}\}$ for computing $\mathcal{L}_{\mathrm{impt}}$ to determine head importance in Sec.
3.1 is set to 1.64 million tokens, which is sufficient in our experiments. $\lambda_{1}$ in Eq. 16 is set to 1 and $\tau$ in Eq. 15 is set to 0.05.

End-task fine-tuning. The learning rate is set to 1e-5 and the batch size to 16. We train on the end-task fine-tuning datasets for 5 epochs for Restaurant; 10 epochs for ACL, AI and PubMed; and 15 epochs for Phone and Camera. We simply take the results for the last epoch, as we empirically found that the above numbers of epochs give us stable and converged results.

# 4.3 Evaluation Results and Ablation Study

We report the end-task results of the 10 baselines on the 6 datasets in Table 2.

Superiority of DGA. Our DGA consistently outperforms all baselines. Thanks to the proposed more informed adaptation, DGA improves over the widely used traditional DA-training baseline MLM. We also see that MLM markedly outperforms RoBERTa (non-DA-training) on average (see the last column). We discuss more observations about the results below.

(1). Training the entire LM in DGA helps achieve much better results. Using adapters (MLM (Adapter)) and prompts (MLM (Prompt)) gives mixed results. This is because adapters and prompts do not have sufficient trainable parameters, and these parameters are randomly initialized and can be difficult to train.
(2). DGA is also better than the distillation-based systems, MLM+AdaptedDeiT and MLM+KD, which try to preserve the past knowledge. This is not surprising because the goal of DA-training is not simply to preserve the previous knowledge but also to adapt/change it as needed to suit the target domain. DGA is specifically designed for this with soft-masking and contrasting of knowledge.
(3). The contrastive learning in DGA is more effective than the other contrastive alternatives (MLM+SimCSE, MLM+TaCL, MLM+TaCO and MLM+InfoWord). This indicates contrasting the
| Model | Camera MF1 | Camera Acc. | Phone MF1 | Phone Acc. | Restaurant MF1 | Restaurant Acc. | AI MF1 | AI Acc. | ACL MF1 | ACL Acc. | PubMed Micro-F1 | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RoBERTa | 78.82 | 87.03 | 83.75 | 86.08 | 79.81 | 87.00 | 60.98 | 71.85 | 66.11 | 71.26 | 72.38 | 73.64 |
| MLM | 84.39 | 89.90 | 82.59 | 85.50 | 80.84 | 87.68 | 68.97 | 75.95 | 68.75 | 73.44 | 72.84 | 76.40 |
| MLM (Adapter) | 83.62 | 89.23 | 82.71 | 85.35 | 80.19 | 87.14 | 60.55 | 71.38 | 68.87 | 72.92 | 71.68 | 74.60 |
| MLM (Prompt) | 85.52 | 90.38 | 84.17 | 86.53 | 79.00 | 86.45 | 61.47 | 72.36 | 66.66 | 71.35 | 73.09 | 74.98 |
| MLM+KD | 82.79 | 89.30 | 80.08 | 83.33 | 80.40 | 87.25 | 67.76 | 75.46 | 68.19 | 72.73 | 72.35 | 75.26 |
| MLM+AdaptedDeiT | 86.86 | 91.37 | 83.08 | 85.64 | 79.70 | 86.84 | 69.72 | 76.83 | 69.11 | 73.35 | 72.69 | 76.86 |
| MLM+SimCSE | 84.91 | 90.35 | 83.46 | 86.08 | 80.88 | 87.59 | 69.10 | 76.25 | 69.89 | 74.30 | 72.77 | 76.84 |
| MLM+TaCL | 81.98 | 88.88 | 81.87 | 84.92 | 81.12 | 87.50 | 64.04 | 73.18 | 63.18 | 70.31 | 69.46 | 73.61 |
| MLM+TaCO | 84.50 | 90.22 | 82.63 | 85.32 | 79.27 | 86.68 | 59.73 | 71.22 | 63.66 | 70.36 | 72.38 | 73.69 |
| MLM+InfoWord | 87.95 | 91.92 | 84.58 | 86.84 | 81.24 | 87.82 | 68.29 | 75.92 | 68.58 | 73.68 | 73.21 | 77.31 |
| DGA | 88.52 | 92.49 | 85.47 | 87.45 | 81.83 | 88.20 | 71.99 | 78.06 | 71.01 | 74.73 | 73.65 | 78.74 |
Table 2: We report the macro-F1 (MF1) and accuracy results for all datasets, except for CHEMPROT in the PubMed domain, for which we use micro-F1 following Gururangan et al. (2020); Dery et al. (2021); Beltagy et al. (2019). The results are averages of 5 random seeds (the standard deviations are reported in Appendix B). The average column (Avg) is the average over the MF1 (or Micro-F1 for PubMed) for all datasets.

general and full knowledge for knowledge integration is important.

Effectiveness of the proxy KL-divergence loss. We use the proposed proxy KL-divergence loss to compute the head importance and thereby identify the general language knowledge in the LM without using the LM's original pre-training data (Sec. 3.1).

For evaluation, we are interested in how good the proxy is. Since we do not have the data that was used to pre-train RoBERTa, it is not obvious how to assess the quality of the proxy directly. Here, we provide some indirect evidence of the effectiveness of the proxy for computing the importance of units to the general knowledge in the LM.

We conduct a separate experiment to compare the attention heads' importance score vectors after applying the proxy using the data from different domains. For each domain $i$, we compare its importance vector with the importance vector of every other domain, and then average the cosine similarities to get the value for domain $i$. We get 0.92 for Restaurant, 0.91 for each of ACL, AI, and Phone, 0.89 for PubMed, and 0.92 for Camera. Different domains thus give similar importance values, which indirectly shows that our proxy can identify the common general knowledge.

We also compute the importance score distributions of the proxy. For each of the 6 domains, after applying the proxy, around $20\%$ of the attention heads are heavily protected $(0.8 \leq I_{lh}^{\mathrm{norm}} \leq 1.0)$ and another $20\%$ moderately protected $(0.6 \leq I_{lh}^{\mathrm{norm}} < 0.8)$, indicating the general knowledge.
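The per-domain comparison and the protection buckets just described can be sketched as follows; the importance vectors here are hypothetical stand-ins for the real attention-head scores:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def avg_cross_domain_similarity(importance):
    """For each domain, average the cosine similarity of its
    attention-head importance vector against every other domain's."""
    names = list(importance)
    result = {}
    for i in names:
        sims = [cosine(importance[i], importance[j]) for j in names if j != i]
        result[i] = sum(sims) / len(sims)
    return result

def protection_fractions(scores):
    """Fraction of heads heavily (0.8 <= I <= 1.0) and moderately
    (0.6 <= I < 0.8) protected, given normalized importance scores."""
    s = np.asarray(scores)
    heavy = float(np.mean((s >= 0.8) & (s <= 1.0)))
    moderate = float(np.mean((s >= 0.6) & (s < 0.8)))
    return heavy, moderate
```

A high average similarity for every domain, as reported above, is what signals that the proxy recovers knowledge shared across domains.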
While Phone, AI, Camera and Restaurant share a similar distribution, ACL and PubMed protect slightly fewer heads. This is understandable, as PubMed and ACL (medical or NLP publications) are probably less common than the other domains, so the general knowledge in the LM covers them less.

Ablation study. To better understand DGA, we want to know (1) whether constraining the neurons in other layers is helpful (the proposed DGA only constrains the attention heads), and (2) where the gain of DGA comes from. To answer (1), we constrain the training of different layers in a standard Transformer. In Table 3 (rows 3-5), "H", "I", and "O" refer to the attention head, intermediate layer, and output layer in a standard Transformer layer, respectively. "E" refers to the embedding layer. The bracketed combinations of "H, I, O, E" indicate where we apply the soft-masking (DGA only applies soft-masking to the attention heads). Their results are similar to or worse than DGA's, implying that attention heads are more indicative of important knowledge. To answer (2), we conduct the following ablation experiments: (i) DGA (w/o contrast), without the contrastive loss, only soft-masking the backward pass according to the attention head importance. (ii) DGA (random masking), with randomly generated attention head importance scores used for soft-masking and contrastive learning. (iii) Ensemble (LM+MLM), which performs the end-task fine-tuning on both the MLM DA-trained RoBERTa (conventional DA-training) and the original RoBERTa (LM) by concatenating their outputs and taking the average. (iv) DGA (domain-specific), the variant that contrasts domain-specific and general knowledge (see Sec. 3.2).15
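The soft-masked backward pass in variant (i) can be sketched as below; the scaling rule (each head's gradient scaled by one minus its normalized importance, so the most important heads change least) is our illustrative reading, not a verbatim reproduction of the paper's formula:

```python
import numpy as np

def soft_mask_gradients(head_grads, head_importance):
    """Scale each attention head's gradient by (1 - I_norm):
    heads with importance near 1 (general knowledge) are nearly
    frozen, while unimportant heads keep their full gradient."""
    masked = {}
    for head, grad in head_grads.items():
        masked[head] = grad * (1.0 - head_importance[head])
    return masked
```

Under this reading, DGA (random masking) simply replaces `head_importance` with random scores, which is why it underperforms the gradient-based mask.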
| Model | Camera MF1 | Camera Acc. | Phone MF1 | Phone Acc. | Restaurant MF1 | Restaurant Acc. | AI MF1 | AI Acc. | ACL MF1 | ACL Acc. | PubMed Micro-F1 | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RoBERTa | 78.82 | 87.03 | 83.75 | 86.08 | 79.81 | 87.00 | 60.98 | 71.85 | 66.11 | 71.26 | 72.38 | 73.64 |
| MLM | 84.39 | 89.90 | 82.59 | 85.50 | 80.84 | 87.68 | 68.97 | 75.95 | 68.75 | 73.44 | 72.84 | 76.40 |
| DGA (H, I) | 86.79 | 91.60 | 84.21 | 86.40 | 81.32 | 87.91 | 71.07 | 77.36 | 69.50 | 73.82 | 73.34 | 77.71 |
| DGA (H, I, O) | 88.04 | 92.01 | 85.85 | 87.63 | 81.45 | 87.79 | 71.54 | 77.61 | 70.52 | 74.58 | 73.10 | 78.42 |
| DGA (H, I, O, E) | 87.05 | 91.60 | 83.74 | 86.11 | 80.64 | 87.61 | 72.64 | 78.17 | 71.24 | 74.96 | 73.54 | 78.14 |
| DGA (w/o contrast) | 86.19 | 90.89 | 84.48 | 86.65 | 81.70 | 87.93 | 68.25 | 75.49 | 69.31 | 73.73 | 72.72 | 77.11 |
| DGA (random mask) | 82.07 | 89.30 | 83.86 | 86.33 | 80.60 | 87.52 | 69.51 | 76.64 | 69.59 | 73.73 | 72.92 | 76.43 |
| Ensemble (LM+MLM) | 85.22 | 90.64 | 85.15 | 87.23 | 79.86 | 86.98 | 65.10 | 74.43 | 68.56 | 73.44 | 72.60 | 76.08 |
| DGA (domain-specific) | 88.06 | 92.04 | 83.45 | 85.82 | 81.72 | 87.90 | 68.00 | 75.57 | 70.91 | 75.06 | 73.17 | 77.55 |
| DGA | 88.52 | 92.49 | 85.47 | 87.45 | 81.83 | 88.20 | 71.99 | 78.06 | 71.01 | 74.73 | 73.65 | 78.74 |
Table 3: Ablation results - averages of 5 random seeds. The standard deviations are reported in Appendix B.

Table 3 shows that the full DGA always gives the best result, indicating that every component contributes. Additional observations are as follows:

(1) DGA's gain is partially from the novel soft-masking: on average, DGA (w/o contrast) outperforms conventional DA-training (MLM). Besides, our gradient-based mask is informative: DGA (random mask) is worse than DGA (w/o contrast) on all datasets. DGA (w/o contrast) is even better than Ensemble, which directly combines the information given by both the original LM and the traditionally DA-trained model during end-task fine-tuning.

(2) Besides soft-masking, contrasting the general and full knowledge also helps: DGA outperforms DGA (w/o contrast) and DGA (domain-specific) on all datasets.

15 The contrastive learning relies on soft-masking. If removed, the contrastive loss will not have the additional negative samples and our DGA becomes MLM+SimCSE.

# 5 Conclusion

This paper argued that an effective DA-training method should integrate the target domain knowledge into the general knowledge in the LM. Existing approaches do not explicitly do this. This paper proposed a novel method, DGA, that achieves it (1) by estimating the importance of the attention heads in the LM and using the importance scores to soft-mask the attention heads in DA-training, preserving the important knowledge in the LM as much as possible, and (2) by contrasting the general and the full knowledge. Extensive experimental results demonstrated the effectiveness of the proposed approach.

# 6 Limitations

While effective, DGA has some limitations. First, the main focus of DGA is to adapt an LM to a given target domain. It does not consider generalization to other domains. For example, it will be interesting to incrementally or continually adapt an LM to more and more domains to make the LM more useful.
Second, the importance of parameters for general knowledge in the LM is computed using a proxy method based on model robustness. Although it is quite effective, it is interesting to explore other approaches to further improve it. We will work on these in our future work as specializing and improving an LM is an important problem. + +# Acknowledgments + +The work of Zixuan Ke and Bing Liu was supported in part by three National Science Foundation (NSF) grants (IIS-1910424, IIS-1838770, and CNS-2225427). + +# References + +Emily Alsentzer, John R Murphy, Willie Boag, WeiHung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical bert embeddings. arXiv preprint arXiv:1904.03323. +Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. +Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems. +Tuhin Chakrabarty, Christopher Hidey, and Kathleen McKeown. 2019. Imho fine-tuning improves claim detection. arXiv preprint arXiv:1905.07000. +Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, and Michael Carbin. 2020a. The lottery ticket hypothesis for pretrained bert networks. Advances in neural information processing systems, 33:15834-15846. + +Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020b. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597-1607. PMLR. +Zhiyuan Chen and Bing Liu. 2018. Lifelong machine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 12(3):1-207. +Lucio M Dery, Paul Michel, Ameet Talwalkar, and Graham Neubig. 2021. Should we be pre-training? an argument for end-task aware training as an alternative. 
arXiv preprint arXiv:2109.07437. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. +Xiaowen Ding, Bing Liu, and Philip S Yu. 2008. A holistic lexicon-based approach to opinion mining. In Proceedings of the 2008 international conference on web search and data mining. +Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. +Angela Fan, Edouard Grave, and Armand Joulin. 2020. Reducing transformer depth on demand with structured dropout. In International Conference on Learning Representations. +Zhiyi Fu, Wangchunshu Zhou, Jingjing Xu, Hao Zhou, and Lei Li. 2022. Contextual representation learning beyond masked language modeling. In ACL. +Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021a. Simcse: Simple contrastive learning of sentence embeddings. arXiv preprint arXiv:2104.08821. +Yang Gao, Nicolo Colombo, and Wei Wang. 2021b. Adapting by pruning: A case study on bert. arXiv preprint arXiv:2105.03343. +Yiwen Guo, Anbang Yao, and Yurong Chen. 2016. Dynamic network surgery for efficient dnns. Advances in neural information processing systems, 29. +Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In ACL. +Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. Advances in neural information processing systems, 28. + +Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738. +Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7). +Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In ICML. +Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of ACM SIGKDD. +David Jurgens, Srijan Kumar, Raine Hoover, Daniel A. McFarland, and Dan Jurafsky. 2018. Measuring the evolution of a scientific field through citation frames. TACL. +Lingpeng Kong, Cyprien de Masson d'Autume, Lei Yu, Wang Ling, Zihang Dai, and Dani Yogatama. 2020. A mutual information maximization perspective of language representation learning. In ICLR. +Jens Kringelum, Sonny Kim Kjaerulff, Søren Brunak, Ole Lund, Tudor I Oprea, and Olivier Taboureau. 2016. Chemprot-3.0: a global chemical biology diseases mapping. Database, 2016. +Cheng-I Jeff Lai, Yang Zhang, Alexander H Liu, Shiyu Chang, Yi-Lun Liao, Yung-Sung Chuang, Kaizhi Qian, Sameer Khurana, David Cox, and Jim Glass. 2021. Parp: Prune, adjust and re-prune for self-supervised speech recognition. NeurIPS, 34. +Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240. +Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In EMNLP. +Jiaoda Li, Ryan Cotterell, and Mrinmaya Sachan. 2021. Differentiable subset pruning of transformer heads. Transactions of the Association for Computational Linguistics, 9:1442-1459. +Zi Lin, Jeremiah Zhe Liu, Zi Yang, Nan Hua, and Dan Roth. 2020. 
Pruning redundant mappings in transformer models via spectral-normalized identity prior. arXiv preprint arXiv:2010.01791. +Bing Liu. 2015. Sentiment analysis: Mining opinions, sentiments, and emotions. Cambridge University Press. + +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR. +Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Daniel S. Weld. 2020. S2ORC: the semantic scholar open research corpus. In ACL. +Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In ACL. +JS McCarley, Rishav Chakravarti, and Avirup Sil. 2019. Structured pruning of a bert-based question answering model. arXiv preprint arXiv:1910.06360. +Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation. +Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? Advances in neural information processing systems, 32. +Jianmo Ni, Jiacheng Li, and Julian J. McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In EMNLP, pages 188-197. Association for Computational Linguistics. +Hassan Sajjad, Fahim Dalvi, Nadir Durrani, and Preslav Nakov. 2020. On the effect of dropping layers of pre-trained transformer models. arXiv preprint arXiv:2004.03844. +Yixuan Su, Fangyu Liu, Zaiqiao Meng, Lei Shu, Ehsan Shareghi, and Nigel Collier. 2021. Tacl: Improving bert pre-training with token-aware contrastive learning. arXiv preprint arXiv:2111.04198. +Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? In China national conference on Chinese computational linguistics, pages 194-206. Springer. 
Duyu Tang, Bing Qin, and Ting Liu. 2016. Aspect level sentiment classification with deep memory network. In EMNLP.
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. 2021. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, pages 10347-10357. PMLR.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.
Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. arXiv preprint arXiv:1905.09418.
Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019a. BERT post-training for review reading comprehension and aspect-based sentiment analysis. In NAACL-HLT.
Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019b. Review conversational reading comprehension. arXiv preprint arXiv:1902.00821.
| Model | Camera MF1 | Camera Acc. | Phone MF1 | Phone Acc. | Restaurant MF1 | Restaurant Acc. | AI MF1 | AI Acc. | ACL MF1 | ACL Acc. | PubMed Micro-F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RoBERTa | ±0.0403 | ±0.0179 | ±0.0210 | ±0.0154 | ±0.0117 | ±0.0049 | ±0.0646 | ±0.0347 | ±0.0192 | ±0.0096 | ±0.0071 |
| MLM | ±0.0479 | ±0.0298 | ±0.0165 | ±0.0103 | ±0.0096 | ±0.0056 | ±0.0117 | ±0.0086 | ±0.0218 | ±0.0118 | ±0.0035 |
| MLM (adapter) | ±0.0165 | ±0.0110 | ±0.0265 | ±0.0181 | ±0.0102 | ±0.0068 | ±0.0551 | ±0.0288 | ±0.0142 | ±0.0099 | ±0.0055 |
| MLM (prompt) | ±0.0243 | ±0.0138 | ±0.0126 | ±0.0087 | ±0.0060 | ±0.0035 | ±0.0301 | ±0.0124 | ±0.0068 | ±0.0108 | ±0.0028 |
| MLM+KD | ±0.0295 | ±0.0158 | ±0.0320 | ±0.0230 | ±0.0099 | ±0.0070 | ±0.0345 | ±0.0224 | ±0.0292 | ±0.0155 | ±0.0093 |
| MLM+AdaptedDeiT | ±0.0187 | ±0.0122 | ±0.0160 | ±0.0101 | ±0.0048 | ±0.0022 | ±0.0250 | ±0.0179 | ±0.0065 | ±0.0079 | ±0.0086 |
| MLM+SimCSE | ±0.0114 | ±0.0077 | ±0.0098 | ±0.0065 | ±0.0029 | ±0.0016 | ±0.0086 | ±0.0056 | ±0.0054 | ±0.0071 | ±0.0027 |
| MLM+TaCL | ±0.0218 | ±0.0103 | ±0.0230 | ±0.0159 | ±0.0105 | ±0.0059 | ±0.0275 | ±0.0156 | ±0.0713 | ±0.0394 | ±0.0118 |
| MLM+TaCO | ±0.0456 | ±0.0232 | ±0.0166 | ±0.0134 | ±0.0077 | ±0.0052 | ±0.0675 | ±0.0380 | ±0.0207 | ±0.0128 | ±0.0099 |
| MLM+InfoWord | ±0.0267 | ±0.0139 | ±0.0272 | ±0.0191 | ±0.0170 | ±0.0089 | ±0.0344 | ±0.0219 | ±0.0070 | ±0.0079 | ±0.0072 |
| DGA | ±0.0095 | ±0.0047 | ±0.0127 | ±0.0094 | ±0.0052 | ±0.0040 | ±0.0127 | ±0.0081 | ±0.0079 | ±0.0080 | ±0.0034 |
+ +Table 4: Standard deviations of the corresponding metrics of the proposed DGA model and the baselines on the six experiments. + +
| Model | Camera MF1 | Camera Acc. | Phone MF1 | Phone Acc. | Restaurant MF1 | Restaurant Acc. | AI MF1 | AI Acc. | ACL MF1 | ACL Acc. | PubMed Micro-F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RoBERTa | ±0.0403 | ±0.0179 | ±0.0210 | ±0.0154 | ±0.0117 | ±0.0049 | ±0.0646 | ±0.0347 | ±0.0192 | ±0.0096 | ±0.0071 |
| MLM | ±0.0479 | ±0.0298 | ±0.0165 | ±0.0103 | ±0.0096 | ±0.0056 | ±0.0117 | ±0.0086 | ±0.0218 | ±0.0118 | ±0.0035 |
| DGA (H, I) | ±0.0373 | ±0.0210 | ±0.0032 | ±0.0039 | ±0.0054 | ±0.0045 | ±0.0095 | ±0.0048 | ±0.0094 | ±0.0073 | ±0.0049 |
| DGA (H, I, O) | ±0.0167 | ±0.0092 | ±0.0182 | ±0.0155 | ±0.0055 | ±0.0033 | ±0.0093 | ±0.0075 | ±0.0080 | ±0.0070 | ±0.0056 |
| DGA (H, I, O, E) | ±0.0237 | ±0.0123 | ±0.0270 | ±0.0187 | ±0.0099 | ±0.0050 | ±0.0109 | ±0.0089 | ±0.0067 | ±0.0057 | ±0.0079 |
| DGA (w/o contrast) | ±0.0433 | ±0.0251 | ±0.0135 | ±0.0106 | ±0.0060 | ±0.0040 | ±0.0197 | ±0.0119 | ±0.0132 | ±0.0093 | ±0.0050 |
| DGA (random mask) | ±0.0879 | ±0.0413 | ±0.0335 | ±0.0235 | ±0.0096 | ±0.0044 | ±0.0153 | ±0.0090 | ±0.0105 | ±0.0059 | ±0.0052 |
| Ensemble | ±0.0332 | ±0.0178 | ±0.0199 | ±0.0139 | ±0.0035 | ±0.0031 | ±0.0236 | ±0.0103 | ±0.0061 | ±0.0028 | ±0.0046 |
| DGA (domain-specific) | ±0.0137 | ±0.0070 | ±0.0259 | ±0.0200 | ±0.0031 | ±0.0018 | ±0.0128 | ±0.0071 | ±0.0108 | ±0.0067 | ±0.0043 |
| DGA | ±0.0095 | ±0.0047 | ±0.0127 | ±0.0094 | ±0.0052 | ±0.0040 | ±0.0127 | ±0.0081 | ±0.0079 | ±0.0080 | ±0.0034 |
Table 5: Standard deviations of the corresponding metrics of the proposed DGA model and the ablations on the six experiments.

# A Datasets Details

Table 2 in the main paper gives the number of examples in each dataset. Here we provide additional details about the 4 types of end-tasks.

(1) (Phone, Camera and Restaurant) Aspect Sentiment Classification (ASC) is defined as follows (Liu, 2015): given an aspect or product feature (e.g., picture quality in a camera review) and a review sentence containing the aspect in a domain or product category (e.g., camera), classify whether the sentence expresses a positive, negative, or neutral (no opinion) sentiment or polarity about the aspect (for Phone and Camera, there are only negative and positive polarities in the data).
(2) (ACL) Citation Intent Classification is defined as follows: given a citing sentence (a sentence that contains a citation), classify whether the sentence expresses a citation function among "background", "motivation", "uses", "extension", "comparison or contrast" and "future".
(3) (AI) Relation Classification is defined as follows: given a within-sentence word span containing a pair of entities, classify whether the span expresses a relation among "feature of", "conjunction", "evaluate for", "hyponym of", "used for", "part of" and "compare".

(4) (PubMed) Chemical-protein Interaction Classification is defined as follows: given a span containing a chemical-protein pair, classify whether the span expresses a chemical-protein interaction among "downregulator", "substrate", "indirect-upregulator", "indirect-downregulator", "agonist", "activator", "product of", "agonist-activator", "inhibitor", "upregulator", "substrate product of", "agonist-inhibitor" and "antagonist".

# B Standard Deviations

Table 4 reports the standard deviations of the corresponding results in Table 2 (in the main paper) for DGA and the considered baselines over 5 runs with random seeds.
We can see that the results of DGA are stable. Some baselines (e.g., RoBERTa on AI, MLM on Camera and MLM+TaCL on ACL) can have quite large standard deviations.

Table 5 reports the standard deviations of the corresponding results in Table 3 (in the main paper) for DGA and the ablation variants over 5 runs with random seeds. We can see that the results of DGA are stable. Some variants (e.g., DGA (random mask) and DGA (w/o contrast) on Camera) can have quite large standard deviations.
# Adaptive Contrastive Learning on Multimodal Transformer for Review Helpfulness Predictions

Thong Nguyen $^{1,2}$ , Xiaobao Wu $^{3}$ , Anh Tuan Luu $^{3*}$ , Cong-Duy Nguyen $^{3}$ , Zhen Hai $^{4}$ , Lidong Bing $^{4}$

$^{1}$ National University of Singapore, Singapore

$^{2}$ VinAI Research, Vietnam

$^{3}$ Nanyang Technological University, Singapore

$^{4}$ DAMO Academy, Alibaba Group

e0998147@u.nus.edu, anhtuan.lu@ntu.edu.sg

# Abstract

Modern Review Helpfulness Prediction systems depend on multiple modalities, typically texts and images. Unfortunately, contemporary approaches pay little attention to polishing the representations of cross-modal relations and tend to suffer from inferior optimization, which can harm the model's predictions in numerous cases. To overcome these issues, we propose Multimodal Contrastive Learning for the Multimodal Review Helpfulness Prediction (MRHP) problem, concentrating on the mutual information between input modalities to explicitly elaborate cross-modal relations. In addition, we introduce an Adaptive Weighting scheme for our contrastive learning approach to increase flexibility in optimization. Lastly, we propose a Multimodal Interaction module to address the unaligned nature of multimodal data, thereby assisting the model in producing more reasonable multimodal representations. Experimental results show that our method outperforms prior baselines and achieves state-of-the-art results on two publicly available benchmark datasets for the MRHP problem.

# 1 Introduction

Current e-commerce sites such as Amazon and eBay construct review platforms to collect user feedback concerning their products.
These platforms play a fundamental role in online transactions since they help future consumers collect useful reviews which assist them in deciding whether to make the purchase or not. Unfortunately, nowadays the number of user-generated reviews is overwhelming, raising doubts related to the relevance and veracity of reviews. Therefore, there is a need to verify the quality of reviews before publishing them to prospective customers. As a result, this inspires a recent surge of interest targeting the Review Helpfulness Prediction (RHP) problem. + +# Product Information + +The Cooks Standard 6-Quart Stainless Steel Stockpot with Lid is made with 18/10 stainless steel with an aluminum disc layered in the bottom. The aluminum disc bottom provides even heat distribution and prevents hot spots. Tempered glass lid with steam hole vent makes viewing food easy. Stainless steel riveted handles offer durability. Induction compatible. Works on gas, electric, glass, ceramic, etc. Oven safe to 500F, glass lid to 350F. Dishwasher safe. + +![](images/174548bbf93175834f65091b5e4d094d8fb3cf6a13a9966e8122e24b649c6d3e.jpg) + +![](images/dd1f6ee6aa84d97b0993de067aa736b569379e49dea55eda2ef2999047a6a709.jpg) + +# Review 1 + +I needed a stainless steel pot for canning my tomatoes. I learned the hard way that you have to use a non-reactive pot or else your end result will be inedible (I thought I was using stainless steel but quickly realized it wasnt) I headed to Amazon and came across this Cooks Standard SS Cookpot with cover and bought it after reading the reviews. I have had it for just under a year and it still looks just as good as the day I bought it. I couldn't be happier with my purchase! Oh, and by the way, this one actually is stainless steel unlike the other pot I bought that said it was and wasn't. + +# Review 2 + +I ordered it on May 21st. What a waste of time and money. + +![](images/4ecb3375a8250cb7010b77ff918e2115bb31e4e1fac02d2dad54892170dd000a.jpg) + +
|  | Review 1 | Review 2 |
| --- | --- | --- |
| Label score | 4 | 1 |
| MCR score | 0.168 | 3.637 |
| Our Model score | 4.651 | 0.743 |
Table 1: Example of unreasonable predictions in the Multimodal Review Helpfulness Prediction task.

Two principal groups of early efforts focus on purely textual data. The first group follows feature engineering techniques, retrieving argument-based features (Liu et al., 2017), lexical features (Krishnamoorthy, 2015), and semantic features (Kim et al., 2006) as input to their classifiers. Inherently, these methods are labor-intensive and vulnerable to the typical issues of conventional machine learning. Instead of relying on manual features, the second group leverages deep neural models, for instance RNNs (Alsmadi et al., 2020) and CNNs (Chen et al., 2018), to learn rich features automatically. Nonetheless, this approach is insufficient because the helpfulness of a review is contingent not only upon textual information but also upon other modalities.

To cope with the above issues, recent works (Liu et al., 2021b; Han et al., 2022) proposed to utilize multi-modality via the Multi-perspective Coherent Reasoning (MCR) model. Hypothesizing that a review is helpful if its text and images are coherent with the product information, those works take into account both the textual and visual modality of the inputs, then estimate their coherence level to discern whether the reviews are helpful or unhelpful. However, the MCR model has a detrimental drawback. In particular, it aims to maximize the scores $s_p$ of positive (helpful) product-review pairs while minimizing the scores $s_n$ of negative (unhelpful) pairs, under the assumption that training in this manner projects features with similar semantics close together and features with disparate semantics far apart. Unfortunately, in multimodal learning this was shown not to be the case, causing the model to learn ad-hoc representations (Zolfaghari et al., 2021). This is one reason for the unreasonable predictions of MCR in Table 1.
As can be seen, even though Review 1 closely relates to the "6-Quart Stainless Steel Stockpot" product, the model classifies it as unhelpful. In addition, the target of Review 2's text content is vague, because it does not specifically correspond to the "Stockpot"; in fact, it could apply to any product. Moreover, the image does not clearly show any hint of the "Stockpot" either. Despite such vagueness, MCR still classifies Review 2 as helpful.

As a remedy to this problem, we propose Cross-modal Contrastive Learning to mine the mutual information of cross-modal relations in the input and capture more sensible representations. Nonetheless, plainly applying a symmetric gradient pattern similar to MCR's, which assigns equivalent penalties to $s_n$ and $s_p$, is inflexible. In cases where $s_p$ is small and $s_n$ is already negatively skewed, or where both $s_p$ and $s_n$ are positively skewed, it is irrational to assign equivalent penalties to both $s_p$ and $s_n$. Last but not least, MCR directly leverages Coherent Reasoning, repeatedly enforcing alignment among modalities in the input. This ignores the unaligned nature of multimodal input; for example, images might refer only to a particular section of the text and hence do not completely align with the textual content. Consequently, strictly enforcing alignment can make the model learn inefficient multimodal representations (Tsai et al., 2019).

To overcome the above problems, we propose an adaptive scheme to achieve flexibility in the optimization of our contrastive learning stage. Finally, we adopt a multimodal attention module that reinforces one modality's high-level features with the low-level features of other modalities. This not only relaxes the alignment assumption but also informs each modality of the information in the others, encouraging refined representation learning.
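The cross-modal contrastive objective mentioned above can be illustrated with a generic InfoNCE-style loss over matched text/image pairs; this is our illustration of the general idea, not the paper's exact formulation:

```python
import numpy as np

def info_nce_loss(text_feats, image_feats, temperature=0.1):
    """Generic InfoNCE-style cross-modal contrastive loss:
    matched text/image pairs (same row index) are positives,
    every other pairing in the batch is a negative."""
    # L2-normalize so dot products are cosine similarities
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    v = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    logits = t @ v.T / temperature                # (B, B) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))    # positives on the diagonal
```

Aligned pairs drive the loss toward zero, while mismatched pairs inflate it, which is the behavior a cross-modal contrastive objective exploits.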
In sum, our contributions are three-fold:

- We propose an Adaptive Cross-modal Contrastive Learning approach for the Review Helpfulness Prediction task, polishing cross-modal relation representations.
- We propose a Multimodal Interaction module which correlates the modalities' features without depending upon the alignment assumption.
- We conducted extensive experiments on two datasets for the RHP problem and found that our method outperforms both textual-only and multimodal baselines, obtaining state-of-the-art results on those benchmarks.

# 2 Model Architecture

In this section we delineate the overall architecture of our MRHP model. The individual modules of our system are depicted in Figure 1.

![](images/da1ed452d5410c14fe8fa3fae7559e8b1df3ece86cd5d66fa5aea6e32fa0e4e6.jpg)
Figure 1: Diagram of our Multimodal Review Helpfulness Prediction model.

# 2.1 Problem Definition

Given a product item $p$, which consists of a description $T^p$ and images $I^p$, and a set of reviews $R = \{r_1, \dots, r_N\}$, where each review is composed of user-generated text $T_i^r$ and images $I_i^r$, the RHP model's task is to generate the scores

$$
s _ {i} = f \left(p, r _ {i}\right), \quad 1 \leq i \leq N \tag {1}
$$

where $N$ is the number of reviews for product $p$ and $f$ is the scoring function of the RHP model. Each score estimated by $f$ indicates the helpfulness level of the corresponding review, and the ground truth is the descending sort order of helpfulness scores.

# 2.2 Encoding Modules

Our model accepts the product description $T^p$, product images $I^p$, review text $T_i^r$, and review images $I_i^r$ as input. The encoding process of these elements is described below.

Text Encoding Product description and review text are sequences of words. Each sequence is indexed into the word embedding layer and then passed into the respective LSTM layer for the product or review.
$$
K^{p} = \mathrm{LSTM}^{p}\left(\mathbf{W}_{\mathrm{emb}}\left(T^{p}\right)\right) \tag{2}
$$

$$
K^{r} = \mathrm{LSTM}^{r}\left(\mathbf{W}_{\mathrm{emb}}\left(T^{r}\right)\right) \tag{3}
$$

where $K^p \in \mathbb{R}^{l_p \times d}$, $K^r \in \mathbb{R}^{l_r \times d}$, $l_p$ and $l_r$ are the sequence lengths of the product and review text respectively, and $d$ is the hidden size.

Image Encoding We follow Anderson et al. (2018) in taking detected objects as the embeddings of an image. In particular, a pre-trained Faster R-CNN is applied to extract ROI features for $m$ objects $\{\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_m\}$ from the product and review images. Subsequently, we encode the extracted features using a self-attention module (SelfAttn) (Vaswani et al., 2017):

$$
A = \mathrm{SelfAttn}\left(\left\{\mathbf{a}_{1}, \mathbf{a}_{2}, \dots, \mathbf{a}_{m}\right\}\right) \tag{4}
$$

where $A \in \mathbb{R}^{m \times d}$ and $d$ is the hidden size. We use $A^p$ and $A^r$ to denote the product and review image features, respectively.

# 2.3 Multimodal Interaction Module

We consider two components $\gamma, \eta$ with inputs $X_{\gamma}, X_{\eta}$, where $\eta$ is the concatenation of the input elements apart from the one in $\gamma$. For instance, if $\gamma = K^p$, then $\eta = [K^r, A^p, A^r]$, where $[\cdot,\cdot]$ denotes the concatenation operation. We define each cross-modal attention block to have three components $Q$, $K$, and $V$:

$$
Q_{\gamma} = X_{\gamma} \cdot W_{Q_{\gamma}} \tag{5}
$$

$$
K_{\eta} = X_{\eta} \cdot W_{K_{\eta}} \tag{6}
$$

$$
V_{\eta} = X_{\eta} \cdot W_{V_{\eta}} \tag{7}
$$

where $W_{Q_{\gamma}} \in \mathbb{R}^{d_{\gamma} \times d_k}$, $W_{K_{\eta}} \in \mathbb{R}^{d_{\eta} \times d_k}$, and $W_{V_{\eta}} \in \mathbb{R}^{d_{\eta} \times d_v}$ are weight matrices.
The interaction between $\gamma$ and $\eta$ is computed in a cross-attention manner:

$$
Z_{\gamma} = \mathrm{CM}_{\gamma}\left(X_{\gamma}, X_{\eta}\right) = \operatorname{softmax}\left(\frac{Q_{\gamma} \cdot K_{\eta}^{T}}{\sqrt{d_{k}}}\right) \cdot V_{\eta} \tag{8}
$$

Our full module comprises $D$ layers of the above attention block, as indicated in the right part of Figure 1. Formally, the computation is carried out as follows:

$$
Q_{\gamma}[0] = X_{\gamma} \tag{9}
$$

$$
T[i] = \mathrm{CM}_{\gamma}[i]\left(\mathrm{LN}\left(Q_{\gamma}[i-1]\right), \mathrm{LN}\left(X_{\eta}\right)\right) \tag{10}
$$

$$
U_{\gamma}[i] = T[i] + Q_{\gamma}[i-1] \tag{11}
$$

$$
Q_{\gamma}[i] = \mathrm{GeLU}\left(\mathrm{Linear}\left(U_{\gamma}[i]\right)\right) \tag{12}
$$

where $\mathrm{LN}$ denotes the layer normalization operator. We iteratively estimate the cross-modal features for the product text, product images, review text, and review images to obtain $H^{p}$, $V^{p}$, $H^{r}$, and $V^{r}$:

$$
H^{p} = Q_{k}^{p}[D], \quad V^{p} = Q_{a}^{p}[D] \tag{13}
$$

$$
H^{r} = Q_{k}^{r}[D], \quad V^{r} = Q_{a}^{r}[D] \tag{14}
$$

After the cross-modal interaction module, we pass the features through relation fusion along three paths: intra-modal, inter-modal, and intra-review.

Intra-modal Fusion The intra-modal alignment is calculated for two kinds of relations: (1) product text - review text and (2) product image - review image.
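As an aside, the stacked cross-attention of Eqs. 9-12 can be sketched as a small NumPy toy. This is illustrative only: it is single-head, uses toy dimensions, and shares one set of weights across layers for brevity, whereas the actual module learns separate parameters per layer.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def gelu(x):
    # tanh approximation of GeLU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def cross_modal_layer(Q_prev, X_eta, W):
    """One layer of Eqs. 10-12: pre-LN cross-attention, residual, GeLU MLP."""
    Q = layer_norm(Q_prev) @ W["q"]
    K = layer_norm(X_eta) @ W["k"]
    V = layer_norm(X_eta) @ W["v"]
    T = softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V   # CM block (Eq. 10)
    U = T + Q_prev                                    # residual (Eq. 11)
    return gelu(U @ W["o"])                           # Eq. 12

rng = np.random.default_rng(1)
d = 16
X_gamma = rng.normal(size=(5, d))   # query modality, e.g. K^p
X_eta = rng.normal(size=(9, d))     # concatenation of the remaining modalities
W = {k: rng.normal(scale=0.1, size=(d, d)) for k in "qkvo"}

Q = X_gamma                          # Eq. 9
for _ in range(3):                   # D stacked layers
    Q = cross_modal_layer(Q, X_eta, W)
print(Q.shape)                       # (5, 16)
```

The query stream keeps the sequence length of its own modality while repeatedly attending over the concatenated features of the others, which is what lets one modality be reinforced by the rest without assuming token-level alignment.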
Firstly, we learn the alignment among intra-modal features via self-attention modules:

$$
H^{\text{intraM}} = \mathrm{SelfAttn}\left(\left[H^{p}, H^{r}\right]\right) \tag{15}
$$

$$
V^{\text{intraM}} = \mathrm{SelfAttn}\left(\left[V^{p}, V^{r}\right]\right) \tag{16}
$$

The intra-modal hidden representations are then fed to a CNN, followed by a max-pooling layer to retain the salient entries:

$$
\mathbf{z}^{\text{intraM}} = \mathrm{MaxPool}\left(\mathrm{CNN}\left(\left[H^{\text{intraM}}, V^{\text{intraM}}\right]\right)\right) \tag{17}
$$

Inter-modal Fusion Similar to the intra-modal alignment, the inter-modal alignment is also calculated for two types of relations: (1) product text - review image and (2) product image - review text. The first step is likewise to relate the feature components using self-attention modules:

$$
H^{\text{prd\_txt-rvw\_img}} = \mathrm{SelfAttn}\left(\left[H^{p}, V^{r}\right]\right) \tag{18}
$$

$$
H^{\text{prd\_img-rvw\_txt}} = \mathrm{SelfAttn}\left(\left[V^{p}, H^{r}\right]\right) \tag{19}
$$

We adopt a mean-pooling layer to aggregate the inter-modal features and then concatenate the pooled vectors to construct the final inter-modal representation:

$$
I^{\text{prd\_txt-rvw\_img}} = \mathrm{MeanPool}\left(H^{\text{prd\_txt-rvw\_img}}\right) \tag{20}
$$

$$
I^{\text{prd\_img-rvw\_txt}} = \mathrm{MeanPool}\left(H^{\text{prd\_img-rvw\_txt}}\right) \tag{21}
$$

$$
\mathbf{z}^{\text{interM}} = \left[I^{\text{prd\_txt-rvw\_img}}, I^{\text{prd\_img-rvw\_txt}}\right] \tag{22}
$$

Intra-review Fusion The intra-review estimation mimics the inter-modal one; the only difference is that it is computed over two other relations: (1) product text - product image and (2) review text - review image.

$$
H^{\text{prd\_txt-prd\_img}} = \mathrm{SelfAttn}\left(\left[H^{p}, V^{p}\right]\right) \tag{23}
$$

$$
H^{\text{rvw\_txt-rvw\_img}} = \mathrm{SelfAttn}\left(\left[H^{r}, V^{r}\right]\right) \tag{24}
$$

$$
G^{\text{prd\_txt-prd\_img}} = \mathrm{MeanPool}\left(H^{\text{prd\_txt-prd\_img}}\right) \tag{25}
$$

$$
G^{\text{rvw\_txt-rvw\_img}} = \mathrm{MeanPool}\left(H^{\text{rvw\_txt-rvw\_img}}\right) \tag{26}
$$

$$
\mathbf{z}^{\text{intraR}} = \left[G^{\text{prd\_txt-prd\_img}}, G^{\text{rvw\_txt-rvw\_img}}\right] \tag{27}
$$

Finally, we concatenate the intra-modal, inter-modal, and intra-review outputs, and feed the concatenated vector to a linear layer to obtain the ranking score:

$$
\mathbf{z}^{\text{final}} = \left[\mathbf{z}^{\text{intraM}}, \mathbf{z}^{\text{interM}}, \mathbf{z}^{\text{intraR}}\right] \tag{28}
$$

$$
f\left(p, r_{i}\right) = \mathrm{Linear}\left(\mathbf{z}^{\text{final}}\right) \tag{29}
$$

# 3 Training Strategies

# 3.1 Adaptive Cross-modal Contrastive Learning

In this section, we present the formulation of our Cross-modal Contrastive Learning, together with its adaptive weighting scheme and derivation.
Cross-modal Contrastive Learning First, we extract the hidden states of helpful product-review pairs. These hidden features are then max-pooled to extract meaningful entries:

$$
\mathbf{h}^{p} = \mathrm{MaxPool}\left(H^{p}\right), \quad \mathbf{h}^{r} = \mathrm{MaxPool}\left(H^{r}\right) \tag{30}
$$

$$
\mathbf{v}^{p} = \mathrm{MaxPool}\left(V^{p}\right), \quad \mathbf{v}^{r} = \mathrm{MaxPool}\left(V^{r}\right) \tag{31}
$$

We formulate our contrastive learning framework by taking positive and negative pairs from the above cross-modal features. In our framework, we hypothesize that pairs formed by modalities of the same sample are positive, whereas pairs formed by modalities of distinct samples are negative:

$$
\mathcal{L}_{\mathrm{CE}} = -\sum_{i=1}^{B} \operatorname{sim}\left(\mathbf{t}_{i}^{1}, \mathbf{t}_{i}^{2}\right) + \sum_{j=1, k=1, j \neq k}^{B} \operatorname{sim}\left(\mathbf{t}_{j}^{1}, \mathbf{t}_{k}^{2}\right) \tag{32}
$$

where $\mathbf{t}^1, \mathbf{t}^2 \in \{\mathbf{h}^p, \mathbf{h}^r, \mathbf{v}^p, \mathbf{v}^r\}$, and $B$ denotes the batch size during training.

Adaptive Weighting The standard contrastive objective suffers from inflexible optimization due to an irrational gradient assignment to positive and negative pairs. To tackle this problem, we propose an Adaptive Weighting strategy for our contrastive framework. We introduce weights $\epsilon^p$ and $\epsilon^n$ to represent the distances from the optimum, and integrate them into the positive and negative terms of our loss.
$$
\mathcal{L}_{\text{AdaptiveCE}} = -\sum_{i=1}^{B} \epsilon_{i}^{p} \cdot \operatorname{sim}\left(\mathbf{t}_{i}^{1}, \mathbf{t}_{i}^{2}\right) + \sum_{j=1, k=1, j \neq k}^{B} \epsilon_{j, k}^{n} \cdot \operatorname{sim}\left(\mathbf{t}_{j}^{1}, \mathbf{t}_{k}^{2}\right) \tag{33}
$$

where $\epsilon_{i}^{p} = [o^{p} - \operatorname{sim}(\mathbf{t}_{i}^{1}, \mathbf{t}_{i}^{2})]_{+}$ and $\epsilon_{j,k}^{n} = [\operatorname{sim}(\mathbf{t}_{j}^{1}, \mathbf{t}_{k}^{2}) - o^{n}]_{+}$. To determine sensible values for $o^p$ and $o^n$, we derive the following theorem.

Theorem 1 The Adaptive Contrastive Loss (33) has the hyperspherical form

$$
\mathcal{L}_{\text{AdaptiveCE}} = \sum_{i=1}^{B} \left(\operatorname{sim}\left(\mathbf{t}_{i}^{1}, \mathbf{t}_{i}^{2}\right) - \frac{o^{p}}{2}\right)^{2} + \sum_{j=1, k=1, j \neq k}^{B} \left(\operatorname{sim}\left(\mathbf{t}_{j}^{1}, \mathbf{t}_{k}^{2}\right) - \frac{o^{n}}{2}\right)^{2} - C,
$$

where $C > 0$.

We provide the proof of Theorem 1 in the Appendix. As a consequence, the contrastive objective reaches its optimum when $\operatorname{sim}(\mathbf{t}_i^1, \mathbf{t}_i^2) = \frac{o^p}{2}$ and $\operatorname{sim}(\mathbf{t}_j^1, \mathbf{t}_k^2) = \frac{o^n}{2}$. Based on this observation, we set $o^p = 2$ and $o^n = 0$ in our experiments.
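As an illustrative sketch (not the training code: cosine similarity stands in for $\operatorname{sim}(\cdot,\cdot)$, and toy random vectors replace the pooled cross-modal features), the adaptively weighted objective of Eq. 33 can be written as:

```python
import numpy as np

def sim(a, b):
    """Cosine similarity, one possible choice for sim(., .)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def adaptive_ce(T1, T2, o_p=2.0, o_n=0.0):
    """Eq. 33: contrastive loss with adaptive weights eps^p and eps^n."""
    B, loss = len(T1), 0.0
    for i in range(B):                           # positive (same-sample) pairs
        s = sim(T1[i], T2[i])
        loss -= max(o_p - s, 0.0) * s            # eps^p_i * sim(t_i^1, t_i^2)
    for j in range(B):                           # negative (cross-sample) pairs
        for k in range(B):
            if j != k:
                s = sim(T1[j], T2[k])
                loss += max(s - o_n, 0.0) * s    # eps^n_{j,k} * sim(t_j^1, t_k^2)
    return loss

rng = np.random.default_rng(0)
h_p = rng.normal(size=(4, 64))     # toy pooled features: batch B=4, dim d=64
print(adaptive_ce(h_p, h_p) < 0.0) # perfectly aligned positives drive loss down

# The weights adapt the penalty to how badly a pair is placed: a positive pair
# near the optimum sim = o^p/2 = 1 contributes almost the per-pair minimum of
# -1, while a poorly aligned one still has a large pull toward that optimum.
good = -max(2.0 - 0.95, 0.0) * 0.95   # ~ -0.9975, near the minimum
bad = -max(2.0 - 0.20, 0.0) * 0.20    # ~ -0.36, far from the minimum
print(good < bad)                      # True
```

Replacing `max(o_p - s, 0.0)` and `max(s - o_n, 0.0)` with the constant 1 recovers the plain objective of Eq. 32, which penalizes every pair equally regardless of how close it already is to the optimum.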
# 3.2 Training Objective

For the Review Helpfulness Prediction problem, the model's parameters are updated according to the pairwise ranking loss

$$
\mathcal{L}_{\text{ranking}} = \sum_{i} \max\left(0, \beta - f\left(p_{i}, r^{+}\right) + f\left(p_{i}, r^{-}\right)\right) \tag{34}
$$

where $r^+$ and $r^{-}$ are randomly sampled reviews such that $r^+$ possesses a higher helpfulness level than $r^{-}$. We jointly combine the contrastive objective with the ranking objective of the Review Helpfulness Prediction problem to train our model:

$$
\mathcal{L} = \mathcal{L}_{\text{AdaptiveCE}} + \mathcal{L}_{\text{ranking}} \tag{35}
$$

# 4 Experiments
| Dataset | Split | Clothing (Product / Review) | Electronics (Product / Review) | Home (Product / Review) |
| --- | --- | --- | --- | --- |
| Lazada | Train & Dev | 8K / 130K | 5K / 52K | 4K / 16K |
| | Test | 2K / 32K | 1K / 13K | 1K / 13K |
| Amazon | Train & Dev | 16K / 349K | 13K / 325K | 18K / 462K |
| | Test | 4K / 87K | 3K / 80K | 5K / 111K |
Table 2: Statistics of the MRHP datasets.

# 4.1 Datasets

We evaluate our methods on two publicly available benchmark datasets for the MRHP task: Lazada-MRHP and Amazon-MRHP.

Lazada-MRHP (Liu et al., 2021b) consists of product items and user-generated reviews from Lazada.com, an e-commerce platform in Southeast Asia. All of the text in the dataset is written in Indonesian.

Amazon-MRHP (Liu et al., 2021b) is collected from Amazon.com, the large-scale international e-commerce platform. Product information and the associated reviews are in English and were extracted between 2016 and 2018.

Both datasets comprise three categories: (i) Clothing, Shoes & Jewelry (Clothing), (ii) Electronics (Electronics), and (iii) Home & Kitchen (Home). Their statistics are presented in Table 2.

# 4.2 Implementation Details

We use a 1-layer LSTM with a hidden dimension of 128. We initialize our word embeddings with fastText embeddings (Bojanowski et al., 2017) for the Lazada-MRHP dataset and 300-dimensional GloVe pretrained word vectors (Pennington et al., 2014) for the Amazon-MRHP dataset. We set our multimodal attention module to have $D = 5$ attention layers. For the visual modality, we extract 2048-dimensional ROI features from each image and encode them into 128-dimensional vectors. Our
| Type | Method | Clothing (MAP / N@3 / N@5) | Electronics (MAP / N@3 / N@5) | Home (MAP / N@3 / N@5) |
| --- | --- | --- | --- | --- |
| Text-only | BiMPM | 60.0 / 52.4 / 57.7 | 74.4 / 67.3 / 72.2 | 70.6 / 64.7 / 69.1 |
| | EG-CNN | 60.4 / 51.7 / 57.5 | 73.5 / 66.3 / 70.8 | 70.7 / 63.4 / 68.5 |
| | Conv-KNRM | 62.1 / 54.3 / 59.9 | 74.1 / 67.1 / 71.9 | 71.4 / 65.7 / 70.5 |
| | PRH-Net | 62.1 / 54.9 / 59.9 | 74.3 / 67.0 / 72.2 | 71.6 / 65.2 / 70.0 |
| Multimodal | SSE-Cross | 66.1 / 59.7 / 64.8 | 76.0 / 68.9 / 73.8 | 72.2 / 66.0 / 71.0 |
| | DR-Net | 66.5 / 60.7 / 65.3 | 76.1 / 69.2 / 74.0 | 72.4 / 66.3 / 71.4 |
| | MCR | 68.8 / 62.3 / 67.0 | 76.8 / 70.7 / 75.0 | 73.8 / 67.0 / 72.2 |
| | Our Model | 70.3 / 64.7 / 69.0 | 78.2 / 72.4 / 76.5 | 75.2 / 68.8 / 73.7 |
+ +Table 3: Helpfulness Prediction results on Lazada-MRHP dataset. + +
| Type | Method | Clothing (MAP / N@3 / N@5) | Electronics (MAP / N@3 / N@5) | Home (MAP / N@3 / N@5) |
| --- | --- | --- | --- | --- |
| Text-only | BiMPM | 57.7 / 41.8 / 46.0 | 52.3 / 40.5 / 44.1 | 56.6 / 43.6 / 47.6 |
| | EG-CNN | 56.4 / 40.6 / 44.7 | 51.5 / 39.4 / 42.1 | 55.3 / 42.4 / 46.7 |
| | Conv-KNRM | 57.2 / 41.2 / 45.6 | 52.6 / 40.5 / 44.2 | 57.4 / 44.5 / 48.4 |
| | PRH-Net | 58.3 / 42.2 / 46.5 | 52.4 / 40.1 / 43.9 | 57.1 / 44.3 / 48.1 |
| Multimodal | SSE-Cross | 65.0 / 56.0 / 59.1 | 53.7 / 43.8 / 47.2 | 60.8 / 51.0 / 54.0 |
| | DR-Net | 65.2 / 56.1 / 59.2 | 53.9 / 44.2 / 47.5 | 61.2 / 51.8 / 54.6 |
| | MCR | 66.4 / 57.3 / 60.2 | 54.4 / 45.0 / 48.1 | 62.6 / 53.5 / 56.6 |
| | Our Model | 67.4 / 58.6 / 61.6 | 56.5 / 47.6 / 50.8 | 63.5 / 54.6 / 57.8 |
Table 4: Helpfulness Prediction results on the Amazon-MRHP dataset.

entire model is trained end-to-end with the Adam optimizer (Kingma and Ba, 2014) and a batch size of 32. For the training objective, we set the margin $\beta$ in the ranking loss to 1.

# 4.3 Baselines

We compare our proposed architecture against the following baselines:

- BiMPM (Wang et al., 2017): a ranking model which encodes the input sentences in two directions to ascertain the matching result.
- Conv-KNRM (Dai et al., 2018): a CNN-based model which encodes n-grams of multiple lengths and uses kernel pooling to generate the final ranking score.
- EG-CNN (Chen et al., 2018): a CNN-based model targeting the data scarcity and OOV problems in the RHP task by taking advantage of character-based representations and domain discriminators.
- PRH-Net (Fan et al., 2019): a baseline that predicts the helpfulness of a review by taking into consideration both the product text and the product metadata.
- DR-Net (Xu et al., 2020): a cross-modality approach that models contrast in associated contexts by leveraging decomposition and relation modules.
- SSE-Cross (Abavisani et al., 2020): a multimodal model that fuses different modalities with stochastic shared embeddings.
- MCR (Liu et al., 2021b): a baseline model focusing on coherent reasoning.

# 4.4 Automatic Evaluation

In Tables 3 and 4, we follow previous work (Liu et al., 2021b) and report Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG@N) (Järvelin and Kekäläinen, 2017) with $N = 3$ and $N = 5$. As the tables show, multimodal approaches achieve better performance than text-only ones.

For the Lazada-MRHP dataset, we achieve an absolute improvement of NDCG@3 of 2.4 points in
Clothing, NDCG@5 of 1.5 points in Electronics, and MAP of 1.4 points in Home over the previous best method, MCR. In addition, our model also obtains better results than the best text-only RHP model, PRH-Net, with gains of 9.8 NDCG@3 points in Clothing, 4.3 NDCG@5 points in Electronics, and 3.6 MAP points in Home. These results show that our method produces reasonable rankings for the associated reviews.

For the Amazon dataset, which is written in English, our model outperforms MCR on all three categories, by 1.4 NDCG@5 points in Clothing, 2.7 points in Electronics, and 1.2 points in Home. These results verify that our interaction module and optimization approach yield more useful multimodal fusion than the previous state-of-the-art baselines, in non-English as well as English contexts.

We also perform significance tests on the Amazon-MRHP and Lazada-MRHP datasets and report the p-values in Table 5. All p-values are smaller than 0.05, verifying that our improvement over the prior best MRHP model, MCR (Liu et al., 2021b), is statistically significant.

| Dataset | Clothing (MAP / N@3 / N@5) | Electronics (MAP / N@3 / N@5) | Home (MAP / N@3 / N@5) |
| --- | --- | --- | --- |
| Lazada | 4.48·10⁻² / 1.55·10⁻² / 3.93·10⁻² | 4.54·10⁻³ / 1.05·10⁻⁴ / 2.63·10⁻³ | 1.09·10⁻³ / 3.40·10⁻² / 3.68·10⁻³ |
| Amazon | 3.45·10⁻² / 4.22·10⁻² / 1.86·10⁻² | 4.37·10⁻³ / 2.81·10⁻² / 3.04·10⁻² | 2.04·10⁻³ / 3.30·10⁻³ / 6.50·10⁻³ |

Table 5: Significance test of the results of our model against the MCR model.

# 4.5 Case Study

In Table 1, we present an example of one product item and two reviews from the Electronics category of the Amazon-MRHP dataset. Whereas MCR fails to predict relevant helpfulness scores, our model produces sensible rankings for both reviews. We hypothesize that our Multimodal Interaction module learns more meaningful representations and that our Adaptive Contrastive Learning framework acquires more logical hidden states of the relations among the input elements; thus, our model generates more rational outcomes.

# 4.6 Ablation Study

In this section, we study the impact of (1) the Adaptive Contrastive Learning framework and (2) the Cross-modal Interaction module.

Adaptive Contrastive Learning As Table 6 shows, plainly integrating contrastive learning without adaptive weighting brings a smaller improvement: NDCG@3 drops by 0.53 points on Lazada-MRHP and NDCG@5 by 0.84 points on Amazon-MRHP. Completely removing the contrastive objective hurts performance further, with NDCG@3 decreasing by 0.77 points on Lazada-MRHP and MAP by 1.06 points on Amazon-MRHP. We hypothesize that the model then loses the ability to learn efficient representations for cross-modal relations.

Cross-modal Interaction In this ablation, we remove the cross-modal interaction module. As shown in Table 6, without the module the improvement degrades; for instance, N@3 drops by 1.89 points on Lazada-MRHP and MAP by 1.39 points on Amazon-MRHP. We hypothesize that without the module, the model depends rigidly on an assumed alignment among the multimodal input elements, which leads to insensible modeling because in most cases cross-modal elements cannot be bijectively mapped to each other.
| Dataset | Model | MAP | N@3 | N@5 |
| --- | --- | --- | --- | --- |
| Lazada | Our Model | 78.15 | 72.43 | 76.49 |
| | - w/o Adaptive Weighting | 77.90 | 71.90 | 75.97 |
| | - w/o Contrastive Objective | 77.69 | 71.66 | 75.85 |
| | - w/o Cross-modal Module | 77.32 | 70.54 | 74.86 |
| Amazon | Our Model | 56.49 | 47.62 | 50.79 |
| | - w/o Adaptive Weighting | 56.03 | 46.98 | 49.95 |
| | - w/o Contrastive Objective | 55.43 | 46.30 | 49.02 |
| | - w/o Cross-modal Module | 55.10 | 45.67 | 48.50 |
Table 6: Ablation study in the Electronics category of the Lazada-MRHP and Amazon-MRHP datasets.

# 4.7 Impact of Contrastive Learning on Cross-modal Relations

Despite the improved performance, it remains unclear whether the enhancement stems from more meaningful representations of the input samples, which we hypothesize to be a significant benefit of our contrastive learning framework. For a deeper investigation, we statistically measure
| Label | Model | Intra-modal (CS / L2) | Inter-modal (CS / L2) | Intra-review (CS / L2) |
| --- | --- | --- | --- | --- |
| 1 | MCR | 0.785 ± 0.002 / 3.852 ± 0.067 | 0.843 ± 0.002 / 11.719 ± 0.001 | 0.845 ± 0.002 / 14.631 ± 0.001 |
| | Our Model | 0.875 ± 0.002 / 6.545 ± 0.007 | 0.957 ± 0.002 / 13.934 ± 0.027 | 0.953 ± 0.002 / 15.160 ± 0.036 |
| 4 | MCR | 0.533 ± 0.004 / 1.014 ± 0.051 | 0.712 ± 0.010 / 9.476 ± 0.001 | 0.617 ± 0.001 / 8.519 ± 0.001 |
| | Our Model | 0.433 ± 0.001 / 0.981 ± 0.005 | 0.564 ± 0.001 / 4.179 ± 0.017 | 0.538 ± 0.001 / 3.827 ± 0.020 |
+ +Table 7: Intra-modal, Inter-modal, and Intra-review distances in Home category of Lazada-MRHP dataset. + +
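As a minimal sketch of how such measurements can be produced (assuming mean pooling over token features, cosine distance for CS, and Euclidean distance for L2; the exact pooling used in our analysis is an implementation detail):

```python
import numpy as np

def cs_distance(X, Y):
    """Cosine distance (CS) between mean-pooled token features."""
    x, y = X.mean(axis=0), Y.mean(axis=0)
    return 1.0 - float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def l2_distance(X, Y):
    """Euclidean distance (L2) between mean-pooled token features."""
    return float(np.linalg.norm(X.mean(axis=0) - Y.mean(axis=0)))

rng = np.random.default_rng(0)
prod_txt = rng.normal(size=(12, 32))                    # product-text tokens
rvw_close = prod_txt + 0.1 * rng.normal(size=(12, 32))  # well-matched review
rvw_far = rng.normal(size=(15, 32))                     # unrelated review

# A matching review should sit closer to the product under both measures.
print(cs_distance(prod_txt, rvw_close) < cs_distance(prod_txt, rvw_far))
print(l2_distance(prod_txt, rvw_close) < l2_distance(prod_txt, rvw_far))
```

Averaging such per-pair distances over all samples of a given helpfulness label yields the kind of entries reported in Tables 7 and 8.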
| Label | Model | Intra-modal (CS / L2) | Inter-modal (CS / L2) | Intra-review (CS / L2) |
| --- | --- | --- | --- | --- |
| 1 | MCR | 0.785 ± 0.006 / 8.532 ± 0.292 | 0.686 ± 0.001 / 9.696 ± 0.300 | 0.880 ± 0.002 / 9.620 ± 0.217 |
| | Our Model | 0.971 ± 0.001 / 10.663 ± 0.770 | 0.976 ± 0.001 / 13.234 ± 0.493 | 0.970 ± 0.001 / 12.222 ± 0.431 |
| 4 | MCR | 0.697 ± 0.009 / 3.045 ± 0.139 | 0.624 ± 0.001 / 3.179 ± 0.830 | 0.781 ± 0.001 / 5.098 ± 0.636 |
| | Our Model | 0.571 ± 0.001 / 1.572 ± 0.037 | 0.488 ± 0.001 / 1.460 ± 0.008 | 0.487 ± 0.001 / 3.555 ± 0.001 |
Table 8: Intra-modal, Inter-modal, and Intra-review distances in the Home category of the Amazon-MRHP dataset.

distances among the input samples using standard distance functions. Tables 7 and 8 present the results. In particular, we estimate the cosine distance (CS) and L2 distance (L2) between the tokens of (1) product text - review text and product image - review image (intra-modal), (2) product text - review image and product image - review text (inter-modal), and (3) product text - product image and review text - review image (intra-review), and then average over all samples. As the results show, our framework is more effective at attracting the elements of helpful pairs and repelling those of unhelpful pairs.

# 5 Related Work

# 5.1 Review Helpfulness Prediction

Early work on the Review Helpfulness Prediction (RHP) task followed text-only approaches. In general, such methods extract salient information from reviews, for instance lexical (Krishnamoorthy, 2015), argument (Liu et al., 2017), and emotional features (Martin and Pu, 2014). These features are then fed to a standard classifier, such as a Random Forest (Louppe, 2014), to produce the output score. Benefiting from the rapid growth of computational resources, contemporary approaches leverage deep learning techniques to tackle the RHP problem. For instance, Wang et al. (2017) propose multi-perspective matching between review and product information via an attention mechanism, while Chen et al. (2018) and Dai et al. (2018) adapt CNN models to learn textual representations from various views.

In reality, review content is determined not only by text but also by other modalities. Consequently, Fan et al. (2019) integrate metadata information of the target product into the prediction model, and Abavisani et al. (2020) filter out uninformative signals before fusing the various modalities. Moreover, Liu et al.
(2021b) perform coherent reasoning to ascertain the matching level between a product and its numerous review items.

# 5.2 Contrastive Estimation

Different from architectural techniques such as Knowledge Distillation (Hinton et al., 2015; Hahn and Choi, 2019; Nguyen and Luu, 2022) or Variational AutoEncoders (Zhao et al., 2020; Nguyen et al., 2021; Nguyen and Luu, 2021; Wang et al., 2019), Contrastive Learning has been introduced as a representation-based yet universal mechanism to enhance natural language processing performance. Proposed by Chopra et al. (2005), Contrastive Learning has been widely adopted across myriad problems of Natural Language Processing (NLP).

As an approach to polish text representations, Gao et al. (2021); Zhang et al. (2021); Liu et al. (2021a); Nguyen and Luu (2021) employ contrastive losses to advance sentence embeddings and topic representations. For downstream tasks, Cao and Wang (2021) propose negative sampling strategies that generate noisy outputs so that a Document Summarization model can learn to distinguish correct summaries from incorrect ones. For Spoken Question Answering (SQA), You et al. (2021) introduce augmentation algorithms in their contrastive learning stage so as to capture noise-invariant representations of utterances. Additionally, Ke et al. (2021) inherit the formulation of the contrastive objective to construct a distillation loss which transfers knowledge of the previous task to the current one, improving tasks in the Aspect Sentiment Classification domain. Unfortunately, despite the surge of interest in applying contrastive learning to NLP, work adapting the method to the MRHP task has been scant.

# 6 Conclusion

In this paper, we propose methods to polish representation learning for the Multimodal Review Helpfulness Prediction task. In particular, we aim to advance cross-modal relation representations by learning mutual information through contrastive learning.
To further enhance our framework, we propose an adaptive weighting strategy that encourages flexibility in optimization. Moreover, we integrate a cross-modal interaction module that loosens the model's reliance on the assumption of alignment among modalities, further refining the multimodal representations. Our framework outperforms prior baselines and achieves state-of-the-art results on the MRHP problem.

# 7 Limitations

Despite the novelty and benefits of our method for the Multimodal Review Helpfulness Prediction (MRHP) problem, it does have some drawbacks. Firstly, although our empirical results demonstrate that the approach works beyond English contexts, we have not verified it in multilingual circumstances in which product or review texts are written in different languages. If a model were corroborated to work efficaciously in such contexts, it could provide numerous benefits for practical deployment; for example, e-commerce applications could leverage one single model across multiple cross-lingual scenarios. Furthermore, our work could also be extended to other domains. For instance, in movie assessment, we need to determine whether a review suits the material in the film, or whether the visual scenes in a comment are consistent with its textual content. These form our prospective future directions.

Secondly, in the MRHP problem, there are several relationships that contrastive learning could exploit to further improve performance. In particular, performing contrastive discrimination between two sets of reviews could furnish the model with useful set-based representations, which consolidate general knowledge for better helpfulness prediction. Similar insights apply to two sets of product information. For now, we leave these promising perspectives for future work.

# 8 Acknowledgement

This work was supported by the Alibaba Innovative Research (AIR) programme with research grant AN-GC-2021-005.
# References

Mahdi Abavisani, Liwei Wu, Shengli Hu, Joel Tetreault, and Alejandro Jaimes. 2020. Multimodal categorization of crisis events in social media. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14679-14689.
Abdalraheem Alsmadi, Shadi AlZu'bi, Mahmoud Al-Ayyoub, and Yaser Jararweh. 2020. Predicting helpfulness of online reviews. arXiv preprint arXiv:2008.10129.
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077-6086.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
Shuyang Cao and Lu Wang. 2021. CLIFF: Contrastive learning for improving faithfulness and factuality in abstractive summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6633-6649.
Cen Chen, Yinfei Yang, Jun Zhou, Xiaolong Li, and Forrest Bao. 2018. Cross-domain review helpfulness prediction based on convolutional neural networks with auxiliary domain discriminators. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 602-607.
Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pages 539-546. IEEE.
Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. 2018. Convolutional neural networks for soft-matching n-grams in ad-hoc search.
In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 126-134.
Miao Fan, Chao Feng, Lin Guo, Mingming Sun, and Ping Li. 2019. Product-aware helpfulness prediction of online reviews. In The World Wide Web Conference, pages 2715-2721.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894-6910.
Sangchul Hahn and Heeyoul Choi. 2019. Self-knowledge distillation in natural language processing. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 423-430.
Wei Han, Hui Chen, Zhen Hai, Soujanya Poria, and Lidong Bing. 2022. SANCL: Multimodal review helpfulness prediction with selective attention and natural contrastive learning. arXiv preprint arXiv:2209.05040.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
Kalervo Järvelin and Jaana Kekäläinen. 2017. IR evaluation methods for retrieving highly relevant documents. In ACM SIGIR Forum, volume 51, pages 243-250. ACM New York, NY, USA.
Zixuan Ke, Bing Liu, Hu Xu, and Lei Shu. 2021. CLASSIC: Continual and contrastive learning of aspect sentiment classification tasks. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6871-6883.
Soo-Min Kim, Patrick Pantel, Timothy Chklovski, and Marco Pennacchiotti. 2006. Automatically assessing review helpfulness. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 423-430.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Srikumar Krishnamoorthy. 2015. Linguistic features for review helpfulness prediction. Expert Systems with Applications, 42(7):3751-3759.
Che Liu, Rui Wang, Jinghua Liu, Jian Sun, Fei Huang, and Luo Si. 2021a. DialogueCSE: Dialogue-based contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2396-2406.
Haijing Liu, Yang Gao, Pin Lv, Mengxue Li, Shiqiang Geng, Minglan Li, and Hao Wang. 2017. Using argument-based features to predict and analyse review helpfulness. arXiv preprint arXiv:1707.07279.
Junhao Liu, Zhen Hai, Min Yang, and Lidong Bing. 2021b. Multi-perspective coherent reasoning for helpfulness prediction of multimodal reviews. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5927-5936.
Gilles Louppe. 2014. Understanding random forests: From theory to practice. arXiv preprint arXiv:1407.7502.
Lionel Martin and Pearl Pu. 2014. Prediction of helpful reviews using emotions extraction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28.
Thong Nguyen and Anh Tuan Luu. 2021. Contrastive learning for neural topic model. Advances in Neural Information Processing Systems, 34:11974-11986.
Thong Nguyen, Anh Tuan Luu, Truc Lu, and Tho Quan. 2021. Enriching and controlling global semantics for text summarization. arXiv preprint arXiv:2109.10616.
Thong Thanh Nguyen and Anh Tuan Luu. 2022. Improving neural cross-lingual abstractive summarization via employing optimal transport distance for knowledge distillation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11103-11111.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for unaligned multimodal language sequences. In Proceedings of the Conference of the Association for Computational Linguistics Meeting, volume 2019, page 6558. NIH Public Access.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.

Yue Wang, Jing Li, Hou Pong Chan, Irwin King, Michael R. Lyu, and Shuming Shi. 2019. Topic-aware neural keyphrase generation for social media language. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2516-2526.

Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 4144-4150.

Nan Xu, Zhixiong Zeng, and Wenji Mao. 2020. Reasoning with multimodal sarcastic tweets via modeling cross-modality contrast and semantic association. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3777-3786.

Chenyu You, Nuo Chen, and Yuexian Zou. 2021. Self-supervised contrastive cross-modality representation learning for spoken question answering. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 28-39.

Dejiao Zhang, Shang-Wen Li, Wei Xiao, Henghui Zhu, Ramesh Nallapati, Andrew O. Arnold, and Bing Xiang. 2021. Pairwise supervised contrastive learning of sentence representations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5786-5798.

He Zhao, Dinh Phung, Viet Huynh, Trung Le, and Wray Buntine. 2020. Neural topic model via optimal transport. In International Conference on Learning Representations.
Mohammadreza Zolfaghari, Yi Zhu, Peter Gehler, and Thomas Brox. 2021. CrossCLR: Cross-modal contrastive learning for multi-modal video representations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1450-1459.

# A Hyperspherical Form of Adaptive Contrastive Loss

We start from the initial formulation of the adaptive contrastive loss:

$$
\mathcal{L}_{\text{AdaptiveCE}} = -\sum_{i=1}^{B} \epsilon_{i}^{p} \cdot \operatorname{sim}\left(\mathbf{t}_{i}^{1}, \mathbf{t}_{i}^{2}\right) + \sum_{j=1, k=1, j \neq k}^{B} \epsilon_{j,k}^{n} \cdot \operatorname{sim}\left(\mathbf{t}_{j}^{1}, \mathbf{t}_{k}^{2}\right) \tag{36}
$$

We first substitute $\epsilon_{i}^{p} = [o^{p} - \operatorname{sim}(\mathbf{t}_{i}^{1},\mathbf{t}_{i}^{2})]_{+}$ and $\epsilon_{j,k}^{n} = [\operatorname{sim}(\mathbf{t}_{j}^{1},\mathbf{t}_{k}^{2}) - o^{n}]_{+}$ into the equation above (assuming the hinge terms are active), and then complete the square:

$$
\begin{aligned}
\mathcal{L}_{\text{AdaptiveCE}} &= \sum_{i=1}^{B} \operatorname{sim}\left(\mathbf{t}_{i}^{1}, \mathbf{t}_{i}^{2}\right)^{2} - o^{p} \cdot \operatorname{sim}\left(\mathbf{t}_{i}^{1}, \mathbf{t}_{i}^{2}\right) + \sum_{j=1, k=1, j \neq k}^{B} \operatorname{sim}\left(\mathbf{t}_{j}^{1}, \mathbf{t}_{k}^{2}\right)^{2} - o^{n} \cdot \operatorname{sim}\left(\mathbf{t}_{j}^{1}, \mathbf{t}_{k}^{2}\right) \tag{37} \\
&= \sum_{i=1}^{B} \left(\operatorname{sim}\left(\mathbf{t}_{i}^{1}, \mathbf{t}_{i}^{2}\right) - \frac{o^{p}}{2}\right)^{2} + \sum_{j=1, k=1, j \neq k}^{B} \left(\operatorname{sim}\left(\mathbf{t}_{j}^{1}, \mathbf{t}_{k}^{2}\right) - \frac{o^{n}}{2}\right)^{2} - C \tag{38}
\end{aligned}
$$

where $C$ collects the constant terms $\left(\frac{o^p}{2}\right)^2$ and $\left(\frac{o^n}{2}\right)^2$ accumulated over the two sums. We thus obtain the hyperspherical form of our contrastive loss.
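The step from Eq. 37 to Eq. 38 is a per-term completion of the square. A quick standalone check (the function names are ours, purely illustrative) confirms the identity for arbitrary similarity values and margins:

```python
def quadratic_form(s, o):
    # Per-term expression in Eq. 37: sim^2 - o * sim
    return s * s - o * s

def completed_square(s, o):
    # Per-term expression in Eq. 38: (sim - o/2)^2 minus the constant (o/2)^2
    return (s - o / 2) ** 2 - (o / 2) ** 2

# The two forms agree for any similarity s in [-1, 1] and any margin o.
for s in [-1.0, -0.3, 0.0, 0.5, 1.0]:
    for o in [0.2, 0.9]:
        assert abs(quadratic_form(s, o) - completed_square(s, o)) < 1e-12
```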
\ No newline at end of file diff --git a/adaptivecontrastivelearningonmultimodaltransformerforreviewhelpfulnessprediction/images.zip b/adaptivecontrastivelearningonmultimodaltransformerforreviewhelpfulnessprediction/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5dad224501368f1544d040479853afc8972903a4 --- /dev/null +++ b/adaptivecontrastivelearningonmultimodaltransformerforreviewhelpfulnessprediction/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e259fbc234f4273ca5a426e86a67a7280ffd9217008ebca11a1ab3845a66cf1b +size 685855 diff --git a/adaptivecontrastivelearningonmultimodaltransformerforreviewhelpfulnessprediction/layout.json b/adaptivecontrastivelearningonmultimodaltransformerforreviewhelpfulnessprediction/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..00209b438657d700d7732fd439a4163aaf80c040 --- /dev/null +++ b/adaptivecontrastivelearningonmultimodaltransformerforreviewhelpfulnessprediction/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1bb9883e89df867e0f6e84ea093332f14744b49fd1df5503652da032ffa0dae7 +size 421078 diff --git a/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/93c33a0e-671e-4c84-9539-f34002ef06a3_content_list.json b/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/93c33a0e-671e-4c84-9539-f34002ef06a3_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ae3c3f1f90c9b6d25d4797815f7d86e6c3f741ac --- /dev/null +++ b/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/93c33a0e-671e-4c84-9539-f34002ef06a3_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4ff1f9983f678acbb30a8f2477a7e8df228470b10db02466b27da86692bc7bd +size 85168 diff --git a/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/93c33a0e-671e-4c84-9539-f34002ef06a3_model.json 
b/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/93c33a0e-671e-4c84-9539-f34002ef06a3_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0be5cab979c02ea6ff70eb5a6ef6433e4c975eaf --- /dev/null +++ b/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/93c33a0e-671e-4c84-9539-f34002ef06a3_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4faca7a1d5c5372bb46c702abd499e3e1e5fdf630bddb26b396852f8c4de28f8 +size 98093 diff --git a/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/93c33a0e-671e-4c84-9539-f34002ef06a3_origin.pdf b/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/93c33a0e-671e-4c84-9539-f34002ef06a3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e88983a944015b6538395676add95dbe84cc02dc --- /dev/null +++ b/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/93c33a0e-671e-4c84-9539-f34002ef06a3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a4eea2c7be3efb285e741ca491832017ae8d14b0fdd349fe0d406866544a624d +size 1794926 diff --git a/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/full.md b/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f06008d9a28753fa22971f7c860990394d59b24e --- /dev/null +++ b/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/full.md @@ -0,0 +1,384 @@ +# Adaptive Label Smoothing with Self-Knowledge in Natural Language Generation + +Dongkyu Lee $^{1,2*}$ Ka Chun Cheung $^{2}$ Nevin L. Zhang $^{1}$ + +$^{1}$ Department of Computer Science and Engineering, HKUST + +$^{2}$ NVIDIA AI Technology Center, NVIDIA + +dleear@cse.ust.hk chcheung@nvidia.com lzhang@cse.ust.hk + +# Abstract + +Overconfidence has been shown to impair generalization and calibration of a neural network. 
Previous studies remedy this issue by adding a regularization term to the loss function, preventing a model from making a peaked distribution. Label smoothing smooths target labels with a pre-defined prior label distribution; as a result, a model is trained to maximize the likelihood of predicting the soft label. Nonetheless, the amount of smoothing is the same for all samples and remains fixed in training. In other words, label smoothing does not reflect the change in the probability distribution mapped by a model over the course of training. To address this issue, we propose a regularization scheme that makes the smoothing parameter dynamic by taking the model probability distribution into account, thereby varying the parameter per instance. A model in training self-regulates the extent of smoothing on the fly during forward propagation. Furthermore, inspired by recent work on bridging label smoothing and knowledge distillation, our work utilizes self-knowledge as a prior label distribution in softening target labels, and presents theoretical support for the regularization effect of knowledge distillation and the dynamic smoothing parameter. Our regularizer is validated comprehensively, and the results illustrate marked improvements in model generalization and calibration, enhancing the robustness and trustworthiness of a model.

# 1 Introduction

In common practice, a neural network is trained to maximize the expected likelihood of observed targets, and the gradient with respect to the objective updates the learnable model parameters. With hard targets (one-hot encoded), the maximum of the objective is approached when a model assigns a high probability mass to the corresponding target label over the output space. That is, due to the normalizing activation function (i.e., softmax), a model is trained so that the logits exhibit a marked difference between the target logit and the other classes' logits (Müller et al., 2019).
Despite its wide application and use, maximum likelihood estimation with hard targets has been found to incur an overconfidence problem: the predictive score of a model does not reflect the actual accuracy of its predictions. Consequently, this leads to degradation in model calibration (Pereyra et al., 2017), as well as in model performance (Müller et al., 2019). Additionally, this problem stands out more clearly with a limited number of samples, as a model is more prone to overfitting. To remedy this phenomenon, Szegedy et al. (2016) proposed label smoothing, in which one-hot encoded targets are replaced with smoothed targets. Label smoothing has boosted performance in computer vision (Szegedy et al., 2016), and has been highly preferred in other domains, such as Natural Language Processing (Vaswani et al., 2017; Lewis et al., 2020).

However, there are several aspects of label smoothing to be discussed. First, it comes with a notable downside: the static smoothing parameter. The smoothing regularizer fails to account for the change in probability mass over the course of training. Even though a model could benefit from adaptive control of the smoothing extent depending on signs of overfitting and overconfidence, the smoothing parameter remains fixed throughout training for all instances.

Another aspect of label smoothing to be considered is its connection to knowledge distillation (Hinton et al., 2015). There have been attempts to bridge label smoothing and knowledge distillation, and the findings suggest that the latter is an adaptive form of the former (Tang et al., 2021; Yuan et al., 2020). However, the regularization effect of self-knowledge distillation on overconfidence is still poorly understood and explored.

To tackle the issues mentioned above, this work presents adaptive label smoothing with self-knowledge as a prior label distribution.
Our regularizer allows a model to self-regulate the extent of smoothing based on the entropic level of the model probability distribution, varying the amount per sample and per time step. Furthermore, our theoretical analysis suggests that self-knowledge distillation and the adaptive smoothing parameter have a strong regularization effect by rescaling gradients in logit space. To the best of our knowledge, our work is the first attempt to make both the smoothing extent and the prior label distribution adaptive. We validate the efficacy of the proposed regularization method on machine translation tasks, achieving superior results in model performance and model calibration compared to other baselines.

# 2 Preliminaries & Related Work

# 2.1 Label Smoothing

Label smoothing (Szegedy et al., 2016) was first introduced to prevent a model from making a peaked probability distribution. Since its introduction, it has been in wide use as a means of regularization (Vaswani et al., 2017; Lewis et al., 2020). In label smoothing, the one-hot encoded ground-truth label $(\pmb{y})$ and a pre-defined prior label distribution $(q)$ are mixed with a weight, the smoothing parameter $(\alpha)$, forming a smoothed ground-truth label. A model with label smoothing is trained to maximize the likelihood of predicting the smoothed label distribution. Specifically,

$$
\mathcal{L}_{ls} = -\sum_{i=1}^{|C|} \left[ (1-\alpha)\, y_{i}^{(n)} \log P_{\theta}\left(y_{i} \mid \boldsymbol{x}^{(n)}\right) + \alpha\, q_{i} \log P_{\theta}\left(y_{i} \mid \boldsymbol{x}^{(n)}\right) \right] \tag{1}
$$

$|C|$ denotes the number of classes, $(n)$ the index of a sample in a batch, and $P_{\theta}$ the probability distribution mapped by a model. $\alpha$ is commonly set to 0.1, and remains fixed throughout training (Vaswani et al., 2017; Lewis et al., 2020).
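To make Eq. 1 concrete, here is a minimal, framework-free sketch of label-smoothed cross entropy with a uniform prior $q_i = 1/|C|$ (the function name and plain-list interface are ours, for illustration only):

```python
import math

def label_smoothing_ce(p_model, target, alpha=0.1):
    """Cross entropy against a uniformly smoothed target (Eq. 1).

    p_model: model probabilities over the |C| classes (floats summing to 1)
    target:  index of the ground-truth class
    alpha:   smoothing weight; q is the uniform prior 1 / |C|
    """
    num_classes = len(p_model)
    q = 1.0 / num_classes
    loss = 0.0
    for i, p in enumerate(p_model):
        y = 1.0 if i == target else 0.0
        soft_target = (1 - alpha) * y + alpha * q  # smoothed ground-truth label
        loss += -soft_target * math.log(p)
    return loss
```

With `alpha = 0` this reduces to the ordinary cross entropy with hard targets; increasing `alpha` shifts probability mass onto the non-target classes.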
A popular choice of $\pmb{q}$ is a uniform distribution $(\pmb{q} \sim U(|C|))$, while a unigram distribution is another option for dealing with an imbalanced label distribution (Vaswani et al., 2017; Szegedy et al., 2016; Müller et al., 2019; Pereyra et al., 2017). The pre-defined prior label distribution remains unchanged; hence the latter cross-entropy term in Equation 1 is equivalent to minimizing the KL divergence between the model prediction and the pre-defined label distribution. In line with this idea, Pereyra et al. (2017) proposed the confidence penalty (ConfPenalty), which adds a negative entropy term to the loss function, thereby minimizing the KL divergence between the uniform distribution and the model probability distribution. Ghoshal et al. (2021) proposed low-rank adaptive label smoothing (LORAS), which jointly learns a noise distribution for softening targets and the model parameters. Li et al. (2020); Krothapalli and Abbott (2020) introduced smoothing schemes that are data-dependent.

# 2.2 Knowledge Distillation

Knowledge distillation (Hinton et al., 2015) aims to transfer the dark knowledge of a (commonly) larger and better-performing teacher model to a student model (Buciluǎ et al., 2006). The idea is to mix the ground-truth label with the model probability distribution of a teacher model, resulting in an adaptive version of label smoothing (Tang et al., 2021).

$$
\mathcal{L}_{kd} = -\sum_{i=1}^{|C|} \left[ (1-\alpha)\, y_{i}^{(n)} \log P_{\theta}\left(y_{i} \mid \boldsymbol{x}^{(n)}\right) + \alpha\, \bar{P}_{\phi}\left(y_{i} \mid \pmb{x}^{(n)}\right) \log \bar{P}_{\theta}\left(y_{i} \mid \pmb{x}^{(n)}\right) \right] \tag{2}
$$

$\phi$ and $\theta$ denote the parameters of the teacher model and the student model, respectively. $\bar{P}$ indicates a probability distribution smoothed with a temperature.
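Eq. 2 differs from Eq. 1 only in the prior: the fixed $q$ is replaced by the teacher's distribution. A minimal sketch with temperature 1 (names and interface are ours; a real implementation would operate on logits in a tensor framework):

```python
import math

def knowledge_distillation_ce(p_student, p_teacher, target, alpha=0.1):
    """Cross entropy against a teacher-smoothed target (Eq. 2, temperature = 1)."""
    loss = 0.0
    for i, (ps, pt) in enumerate(zip(p_student, p_teacher)):
        y = 1.0 if i == target else 0.0
        # The teacher's probability plays the role of the prior q_i in Eq. 1.
        loss += -((1 - alpha) * y + alpha * pt) * math.log(ps)
    return loss
```

When the teacher distribution is uniform, this coincides with uniform label smoothing, which is exactly the equivalence noted in the next paragraph.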
Similar to label smoothing, $\phi$ remains unchanged in training; thus the student model is trained to minimize the KL divergence between its probability distribution and that of the teacher model. When $\bar{P}_{\phi}$ follows a uniform distribution with the temperature set to 1, the loss function of knowledge distillation is identical to that of uniform label smoothing.

Training a large teacher model can be computationally expensive; for this reason, there have been attempts to replace the teacher model with the student model itself, called self-knowledge distillation (Zhang et al., 2019; Yuan et al., 2020; Kim et al., 2021; Zhang and Sabuncu, 2020). TF-KD (Yuan et al., 2020) trains a student with a pre-trained teacher that is identical to the student in terms of structure. SKD-PRT (Kim et al., 2021) utilizes the previous-epoch checkpoint as a teacher with a linear increase in $\alpha$. Zhang and Sabuncu (2020) incorporate beta distribution sampling (BETA) and self-knowledge distillation (SD), and introduce an instance-specific prior label distribution. Yun et al. (2020) utilize self-knowledge distillation to align the predictive distributions of samples from the same class, encouraging consistent probability distributions within a class.

# 3 Approach

The core components of label smoothing are twofold: the smoothing parameter $(\alpha)$ and the prior label distribution. These components determine how much to smooth the target label, and with which distribution, choices that require careful selection. In this section, we illustrate how to make the smoothing parameter adaptive. We also demonstrate, through a theoretical analysis of the gradients, how our adaptive smoothing parameter and self-knowledge distillation as a prior distribution act as a form of regularization.
# 3.1 Adaptive $\alpha$

An intuitive way of softening the hard target is to make the choice of $\alpha$ dynamic: a sample with a low entropic level in the model prediction, an indication of a peaked probability distribution, receives a high smoothing parameter to further smooth the target label; conversely, when high entropy of the model prediction (a flat distribution) is seen, the smoothing factor is decreased.

With this intuition, our method computes the smoothing parameter on the fly during forward propagation in training, relying on the entropic level of the model probability distribution per sample, and per time step in the case of sequential classification.

$$
H\left(P_{\theta}\left(\boldsymbol{y} \mid \boldsymbol{x}^{(n)}\right)\right) = -\sum_{i=1}^{|C|} P_{\theta}\left(y_{i} \mid \boldsymbol{x}^{(n)}\right) \log P_{\theta}\left(y_{i} \mid \boldsymbol{x}^{(n)}\right) \tag{3}
$$

The entropy quantifies how the probability mass is distributed across the label space; therefore, low entropy is an indication of overfitting and overconfidence (Pereyra et al., 2017; Meister et al., 2020).

Since entropy does not have a fixed range between 0 and 1, one simple scheme is to normalize the entropy by the maximum entropy $(\log |C|)$. This normalization handles the variable size of the class set among different datasets.

$$
\alpha^{(n)} = 1 - \frac{H\left(P_{\theta}\left(\boldsymbol{y} \mid \boldsymbol{x}^{(n)}\right)\right)}{\log |C|} \tag{4}
$$

With this mechanism, a sample with high entropy is trained with a low $\alpha$, and a sample with low entropy receives a high $\alpha$. The computation of $\alpha$ is excluded from the computation graph for the gradient calculation; hence, the gradient does not flow through the adaptive $\alpha^{(n)}$.

There are two essential benefits of adopting the adaptive smoothing parameter.
As the smoothing extent is determined by the model's own probability mass over the output space, the hyperparameter search for $\alpha$ is removed. Furthermore, it is strongly connected to the gradient rescaling effect in self-knowledge distillation, which is dealt with in detail in Section 3.3.

# 3.2 Self-Knowledge As A Prior

Similar to (Kim et al., 2021; Liu et al., 2021), our regularizer loads a past student checkpoint as the teacher network parameters in the course of training, though with a core difference in the selection process. The intuition is to utilize past self-knowledge that generalizes well, thereby hindering the model from overfitting to observations in the training set.

$$
\phi_{t} = \underset{\theta_{i} \in \Theta_{t}}{\operatorname{argmax}}\; g\left(f\left(X^{\prime}; \theta_{i}\right), Y^{\prime}\right) \tag{5}
$$

$\Theta_{t}$ is the set of past model checkpoints up to the current epoch $t$ in training, and the function $f$ is a specific task, which in our work is machine translation. $X^{\prime}$ and $Y^{\prime}$ are sets of input and ground-truth samples from a validation dataset, and the function $g$ can be any proper evaluation metric for model generalization (e.g., accuracy). Our work utilizes the $n$-gram matching score BLEU (Papineni et al., 2002) as the function $g$ for finding the suitable prior label distribution.

Equation 5 depicts how the selection of a self-teacher depends on the generalization of each past epoch checkpoint. In other words, the past checkpoint with the least generalization error is utilized as the self-teacher, a source of self-knowledge, to provide generalized supervision. Furthermore, at every epoch, following Equation 5, the proposed approach replaces the self-teacher with the checkpoint with the best generalization.
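The two mechanisms above, the per-instance smoothing weight of Eqs. 3-4 and the self-teacher selection of Eq. 5, can be sketched in a few lines of plain Python (function names and the callable-based interface are ours, not from the paper):

```python
import math

def adaptive_alpha(p_model):
    """Instance-wise smoothing weight (Eqs. 3-4): alpha = 1 - H(p) / log|C|."""
    entropy = -sum(p * math.log(p) for p in p_model if p > 0)
    return 1.0 - entropy / math.log(len(p_model))

def select_self_teacher(checkpoints, validation_score):
    """Eq. 5: pick the past checkpoint whose parameters score best on the dev set.

    checkpoints:      mapping epoch -> model parameters
    validation_score: callable(params) -> generalization metric (e.g. BLEU),
                      higher is better
    """
    return max(checkpoints, key=lambda epoch: validation_score(checkpoints[epoch]))
```

A flat distribution yields `alpha` near 0 (little extra smoothing), while a peaked, overconfident distribution yields `alpha` near 1.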
Combining the adaptive smoothing parameter and self-knowledge as a prior distribution, our loss function is as follows:

$$
\mathcal{L} = -\sum_{i=1}^{|C|} \left[ (1-\alpha^{(n)})\, y_{i}^{(n)} \log P_{\theta}\left(y_{i} \mid \boldsymbol{x}^{(n)}\right) + \alpha^{(n)} P_{\phi}\left(y_{i} \mid \pmb{x}^{(n)}\right) \log P_{\theta}\left(y_{i} \mid \pmb{x}^{(n)}\right) \right] \tag{6}
$$

![](images/bd8d5557996f45596919f3d83dc6a44a877cb3f1a1fbbd4af0ca1df5c3c3874f.jpg)
Figure 1: Overview of the proposed regularization. $d$, $|N|$ and $t$ are the input dimension size, batch size, and current epoch, respectively. The time step is not depicted in the figure, yet one can easily extend the above to sequential classification tasks.

The core differences to previous approaches are the introduction of 1) an instance-specific $\alpha$ and 2) a self-teacher with the least generalization error in training.

# 3.3 Gradient Analysis

Tang et al. (2021); Kim et al. (2021) theoretically find that the success of knowledge distillation is related to gradient rescaling in the logit space: the difficulty of a sample determines the rescaling factor, and difficult-to-learn samples receive higher rescaling factors than easy-to-learn samples. We further extend the gradient analysis from the perspective of the regularization effect and the direction of the gradient, and discuss the importance of the adaptive smoothing parameter.

Before dissecting the gradients, we first state a hypothesis: the teacher network makes a less confident prediction than the student. In self-knowledge distillation with a past checkpoint as the teacher, this assumption is valid; the expected predictive score on the target label by the teacher model is
+ +The gradient with respect to the logit $(z)$ by the cross entropy loss $(\mathcal{L}_{ce})$ is as follows: + +$$ +\frac {\partial \mathcal {L} _ {c e}}{\partial z _ {i}} = P _ {\theta} (y _ {i}) - y _ {i} \tag {7} +$$ + +With knowledge distillation $(\mathcal{L}_{kd})$ , the gradient on the logit is + +$$ +\frac {\partial \mathcal {L} _ {k d}}{\partial z _ {i}} = (1 - \alpha) \left(P _ {\theta} \left(y _ {i}\right) - y _ {i}\right) + \alpha \left(P _ {\theta} \left(y _ {i}\right) - P _ {\phi} \left(y _ {i}\right)\right) \tag {8} +$$ + +The following compares the ratio of the gradient from knowledge distillation and with that of the cross entropy. + +$$ +\frac {\partial \mathcal {L} _ {k d} / \partial z _ {i}}{\partial \mathcal {L} _ {c e} / \partial z _ {i}} = (1 - \alpha) + \alpha \frac {P _ {\theta} (y _ {i}) - P _ {\phi} (y _ {i})}{P _ {\theta} (y _ {i}) - y _ {i}} \tag {9} +$$ + +When $i = j$ , with $j$ being the index of the ground truth, it is worth noting that the denominator of the second term in Equation 9 has range $P_{\theta}(y_i) - 1 \in [-1,0]$ , and the range of the numerator is confined to $P_{\theta}(y_i) - P_{\phi}(y_i) \in [0,1]$ . Therefore, the equation can be written as + +$$ +\frac {\partial \mathcal {L} _ {k d} / \partial z _ {i}}{\partial \mathcal {L} _ {c e} / \partial z _ {i}} = (1 - \alpha) - \alpha \left| \frac {P _ {\theta} \left(y _ {i}\right) - P _ {\phi} \left(y _ {i}\right)}{P _ {\theta} \left(y _ {i}\right) - 1} \right| \tag {10} +$$ + +The norm of the gradient drastically diminishes when there is a large difference between the predictions by the models, and when the predictive score of a student model is high, which is a sign of overconfidence. 
In terms of the direction of the gradient, when

$$
(1-\alpha) < \alpha\left|\frac{P_{\theta}\left(y_{i}\right) - P_{\phi}\left(y_{i}\right)}{P_{\theta}\left(y_{i}\right) - 1}\right| \tag{11}
$$

holds, the direction of the gradient from knowledge distillation becomes opposite to that of the cross entropy, pushing the parameters to lower the likelihood of the target index.

The same applies when $i$ is the index of an incorrect class ($i \neq j$). From Equation 9, the following can be derived:

$$
\frac{\partial \mathcal{L}_{kd} / \partial z_{i}}{\partial \mathcal{L}_{ce} / \partial z_{i}} = 1 - \alpha \frac{P_{\phi}(y_{i})}{P_{\theta}(y_{i})} \tag{12}
$$

With the generalized teacher, the expected predictive score on the incorrect labels by the teacher model is higher than that of the student model. Therefore, in addition to the shrinking-norm effect, the direction of the gradient can be reversed when $1 < \alpha \frac{P_{\phi}(y_i)}{P_{\theta}(y_i)}$, similar to Equation 11; as a result, the model parameters are updated to increase the likelihood of the incorrect classes, the opposite behavior to that of the cross entropy with hard targets. Overall, in either case, the theoretical analysis depicts strong regularization effects from the generalized supervision of the teacher.

Connection to Label Smoothing As label smoothing is also closely linked to knowledge distillation, the theoretical support can be easily extended to label smoothing if $P_{\phi}(y_i)$ is replaced with the uniform prior $\frac{1}{|C|}$ in the case of uniform label smoothing, and with $P(c_i)$ in unigram label smoothing.

Importance of Adaptive $\alpha$ The adaptive $\alpha$ is another factor to be discussed regarding the gradient analysis.
As clearly demonstrated in Equations 11 and 12, a high $\alpha$, an indication of a peaked probability distribution, not only leads to a drastic decrease in the gradient norm, but is also likely to make the gradient point in the opposite direction to that of the cross entropy. It enforces a student to distribute the probability mass more evenly over the output space, as opposed to the effect of the cross entropy with hard targets. Furthermore, as the parameter updates are performed by aggregating the losses of samples, adaptive smoothing acts as a gradient rescaling mechanism. Hence, the following proposition can be made.

Proposition 1. Given any two samples $(\mathbf{x}^{(i)},\mathbf{y}^{(i)})$, $(\mathbf{x}^{(k)},\mathbf{y}^{(k)})\in \mathcal{X}\times \mathcal{Y}$ with $P_{\theta}(y_j^{(i)}|\mathbf{x}^{(i)}) = P_{\theta}(y_j^{(k)}|\mathbf{x}^{(k)})$, the average gradient rescaling factor $w$ over all classes is greater for the sample with high probability entropy than for the one with low probability entropy.

For details, please refer to Appendix A. The gradient rescaling by adaptive $\alpha$ reweights the gradients when aggregating the losses; hence, the proposed method prioritizes learning samples with high entropy, i.e., less confident instances. The use of adaptive $\alpha$ is not only intuitive in terms of tackling overconfidence, but also serves as an important part of the theoretical support.

# 4 Experiment

# 4.1 Dataset & Experiment Setup

We validate the proposed regularizer on three popular translation corpora: IWSLT14 German-English (DE-EN) (Cettolo et al., 2014), IWSLT15 English-Vietnamese (EN-VI) (Cettolo et al., 2015), and the Multi30K German-English pair (Elliott et al., 2016). The details can be found in Appendix C.

The core reason for conducting experiments on translation comes from one aspect of natural language: the presence of intrinsic uncertainty (Ott et al., 2018).
In natural language, synonyms can be used interchangeably in a sentence, as they denote the same meaning. Such uncertainty is not reflected in the one-hot encoded form. Hence, a model in a natural language generation task can benefit from the inter-class relations held within such knowledge (Hinton et al., 2015).

All of the experiments are conducted with the Transformer architecture (Vaswani et al., 2017) on a Tesla V100. For generation, the beam size is set to 4 in the inference stage. The training configuration follows the instructions of fairseq (Ott et al., 2019). For the quality of the generated outputs, we report the popular metrics for machine translation: BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), Word Error Rate (WER), ROUGE-L (Lin, 2004), and NIST (Doddington, 2002).

Table 1: The scores are reported in percentage and are averaged over three runs with different random seeds. Except for the results from the cross entropy with hard targets, denoted as Base hereinafter, the scores are absolute differences from those of Base. Bold numbers indicate the best performance among the methods.
| Corpus | Method | BLEU (↑) | METEOR (↑) | WER (↓) | ROUGE-L (↑) | NIST (↑) |
| --- | --- | --- | --- | --- | --- | --- |
| Multi30K DE→EN | Base | 40.64 | 73.31 | 38.76 | 69.21 | 7.96 |
| | Uniform LS | +1.90 | +1.47 | -0.76 | +0.96 | +0.12 |
| | Unigram LS | +1.87 | +1.30 | -1.22 | +1.09 | +0.17 |
| | ConfPenalty | +2.50 | +1.72 | -0.81 | +1.33 | +0.15 |
| | LORAS | +1.14 | +1.00 | -0.73 | +0.63 | +0.16 |
| | TF-KD | +1.13 | +1.02 | -1.21 | +0.73 | +0.15 |
| | SKD-PRT | +1.31 | +0.95 | -0.31 | +0.54 | +0.09 |
| | BETA | +1.26 | +0.94 | -0.20 | +0.46 | +0.07 |
| | SD | +2.76 | +2.09 | -1.26 | +1.55 | +0.18 |
| | Ours | **+3.75** | **+2.91** | **-2.19** | **+2.17** | **+0.32** |
| IWSLT15 EN→VI | Base | 30.17 | 59.09 | 54.02 | 63.91 | 7.18 |
| | Uniform LS | +0.57 | +0.62 | -0.40 | +0.42 | +0.05 |
| | Unigram LS | +0.62 | +0.48 | -0.79 | +0.43 | +0.08 |
| | ConfPenalty | +0.93 | +0.56 | -1.02 | +0.59 | +0.12 |
| | LORAS | -0.04 | -0.10 | -0.23 | -0.19 | +0.02 |
| | TF-KD | -0.01 | -0.15 | -0.01 | -0.08 | -0.02 |
| | SKD-PRT | +1.03 | +0.95 | -1.31 | +0.80 | +0.16 |
| | BETA | +0.30 | +0.20 | -0.63 | +0.17 | +0.07 |
| | SD | +0.68 | +0.46 | -0.63 | +0.45 | +0.08 |
| | Ours | **+1.37** | **+1.14** | **-1.80** | **+1.05** | **+0.21** |
| IWSLT14 DE→EN | Base | 35.96 | 64.70 | 48.17 | 61.82 | 8.47 |
| | Uniform LS | +0.86 | +0.61 | -0.67 | +0.59 | +0.14 |
| | Unigram LS | +1.01 | +0.68 | -0.87 | +0.76 | +0.16 |
| | ConfPenalty | +1.15 | +0.86 | -1.08 | +0.85 | +0.19 |
| | LORAS | +0.36 | +0.23 | +0.61 | +0.16 | -0.03 |
| | TF-KD | +0.39 | +0.19 | -0.33 | +0.29 | +0.06 |
| | SKD-PRT | +1.53 | +1.11 | -1.69 | +1.25 | +0.24 |
| | BETA | +0.95 | +0.69 | -0.30 | +0.58 | +0.08 |
| | SD | +1.39 | +1.06 | -0.84 | +0.88 | +0.16 |
| | Ours | **+1.86** | **+1.55** | **-2.08** | **+1.59** | **+0.32** |
# 4.2 Experimental Result & Analysis

Automatic evaluation results on the three test datasets are shown in Table 1. Though most of the methods achieve meaningful gains, the most noticeable difference is seen with our method. Our regularization scheme shows solid improvements on all of the metrics and datasets without any additional learnable parameters. For example, the absolute gain in BLEU over the base method on the Multi30K dataset is around 3.75, a $9.2\%$ relative improvement. Not only does our method excel in the $n$-gram matching score, but it also shows superior performance in sharing the longest common subsequence with the reference text, as well as in the informativeness of the $n$-grams. The empirical results demonstrate that our regularizer improves the base method across all the metrics by a large margin.

In Figure 2, the changes in $\alpha$ during training are visualized. As expected, the smoothing parameters start with a very small value, as the entropic level must be high for the under-fitted models. As training continues, the predictive scores of the models increase, and accordingly, the adaptive $\alpha$ increases to prevent overconfidence. One notable aspect is the convergence at a certain level: training on each corpus ends up with a different $\alpha$, and the model in training self-regulates the amount of smoothing until the value converges.

Furthermore, our adaptive $\alpha$ affects the norm of the gradients, as depicted in Figure 3. The gradient norm of our regularizer is considerably smaller than that of the other methods. This empirical finding conforms to the gradient analysis in Section 3.3, where the importance of the adaptive $\alpha$ and the generalized teacher model are discussed.

Table 2: We report Expected Calibration Error (ECE) and Maximum Calibration Error (MCE), in percentage, on the test sets of the corpora for evaluating calibration ability.
| Method | Multi30K DE→EN ECE (↓) | Multi30K DE→EN MCE (↓) | IWSLT15 EN→VI ECE (↓) | IWSLT15 EN→VI MCE (↓) | IWSLT14 DE→EN ECE (↓) | IWSLT14 DE→EN MCE (↓) |
| --- | --- | --- | --- | --- | --- | --- |
| Base | 14.95 | 26.01 | 14.05 | 20.38 | 12.98 | 19.29 |
| Uniform LS | 9.17 | 17.22 | 8.53 | 12.13 | 6.43 | 9.98 |
| Unigram LS | 9.12 | 17.78 | 7.89 | 11.71 | 6.12 | 9.46 |
| ConfPenalty | 48.21 | 73.46 | 43.94 | 59.28 | 48.19 | 57.58 |
| LORAS | 20.27 | 40.86 | 12.41 | 19.15 | 10.54 | 15.29 |
| TF-KD | 21.18 | 42.87 | 13.30 | 19.29 | 12.20 | 17.60 |
| SKD-PRT | 14.75 | 26.69 | 9.34 | 14.18 | 5.63 | 8.88 |
| BETA | 11.71 | 21.90 | 9.57 | 14.97 | 8.63 | 13.21 |
| SD | 6.87 | 12.38 | 5.01 | 9.64 | 7.82 | 13.71 |
| Ours | 4.76 | 12.41 | 2.15 | 4.40 | 1.76 | 3.64 |
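As a concrete reference for how the numbers in Table 2 are defined, ECE bins predictions into equal-width confidence bins and takes the sample-weighted average gap between per-bin accuracy and confidence, while MCE takes the maximum gap. The following is a minimal NumPy sketch of this standard computation (our own illustration, not the paper's evaluation code):

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=10):
    """Expected and Maximum Calibration Error over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce, n = 0.0, 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Half-open bins (lo, hi]; a confidence of exactly 0.0 is ignored here.
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += mask.sum() / n * gap  # weight each bin by its share of samples
        mce = max(mce, gap)
    return ece, mce
```

Table 2 reports these quantities in percent, i.e. multiplied by 100.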
![](images/e21da4b6647a331d666f3dc631fe170cad3157b8b52e6e23c1018adf52af68b4.jpg)
Figure 2: Illustration of the changes in the proposed smoothing parameter $\alpha$ throughout training on the tested corpora.

![](images/ff7cb8de56dc2fa729792ebdcc4befa565b710f865f1dc39eac227a401013932.jpg)
Figure 3: Change in the gradient norm of the baseline methods and the proposed approach on the IWSLT15 EN→VI corpus.

# 4.2.1 Model Calibration

In addition to the automatic evaluation, in which the improved generalization is seen through the performance gains, we look into the calibration of models trained with each method. Figure 4(a) depicts how cross entropy with hard targets tends to make a model overconfident in its predictions. In the reliability diagram, the confidence score in each bin is larger than the corresponding accuracy, and the gap is fairly noticeable. Label smoothing mitigates the problem to some extent, yet the gap between the accuracy and the confidence score remains clear. We empirically find that models trained with the baseline methods suffer from either overconfidence or underconfidence. On the other hand, the proposed regularizer significantly reduces the gap, showing enhanced model calibration. As clearly depicted in Figure 4(c), the confidence level of each bin largely conforms with the accuracy, demonstrating that the model trained with the proposed approach makes reliable predictions.

The improvement in calibration is clearer with the expected calibration error (ECE) and maximum calibration error (MCE) reported in Table 2. For instance, on the IWSLT14 dataset, the errors with label smoothing drop significantly in both metrics: around a $6\%$ absolute decrease in ECE and $10\%$ in MCE. The proposed method goes further, with an ECE of $1.76\%$, which is around an $11\%$ absolute decrease and an $86\%$ relative improvement.
In addition, our method achieves $3.64\%$ in MCE, which is an $81\%$ relative improvement over the base method. The improved calibration with our method is seen across the datasets, confirming the effectiveness of our system in enhancing model calibration.

One important finding is the gap between the performance in model generalization and the calibration error.

![](images/781d849e9a90c087568c65253b107c0681140644debca2a512c07170e9af1d57.jpg)
(a) Base

![](images/9d3076549648fa55be84802e6a45a36d5f05204a39c2070cd54a3844d748dd80.jpg)
(b) Uniform Label Smoothing

Figure 4: Reliability diagrams of the Base method, uniform label smoothing, and ours. Predictions on the IWSLT14 DE→EN test set are binned into 10 groups based on the predictive scores. Each bar indicates the average confidence score and accuracy of its bin.

![](images/ff5bc960aecbd27200a58a291b225f851c7b419ffaf89f4248fa97d266e2719b.jpg)
(c) Ours

Table 3: $(+)$ denotes adding the following components to the base method. $\alpha^{(n)}$ denotes our adaptive $\alpha$, and $\alpha^{\uparrow}$ indicates a linear increase in $\alpha$ over the course of training. SK and Uniform denote Self-Knowledge and the Uniform distribution as the prior label distribution, respectively. $g_{\mathrm{NLL}}$ and $g_{\mathrm{BLEU}}$ indicate the $g$ function set to negative log-likelihood and BLEU, respectively.

| Method | BLEU (↑) | ECE (↓) |
| --- | --- | --- |
| Base | 35.96 | 12.98 |
| (+) Fixed $\alpha$ & SK | 36.27 | 13.56 |
| (+) $\alpha^{(n)}$ & Uniform | 37.30 | 18.76 |
| (+) $\alpha^{\uparrow}$ & SK | 37.52 | 5.58 |
| Ours ($g_{\mathrm{NLL}}$) | 37.74 | 1.30 |
| Ours ($g_{\mathrm{BLEU}}$) | 37.82 | 1.76 |

Confidence penalty (Pereyra et al., 2017) is highly competitive in $n$-gram matching scores (BLEU) on all of the datasets tested. Nevertheless, its calibration error is the highest among the methods due to underconfidence. Similar to the finding in Guo et al. (2017), a discrepancy between performance and model calibration exists, and it calls for caution when training a neural network with model calibration in mind.

# 4.3 Ablation Study

Table 3 shows the change in performance when our core components are added to the base method on the IWSLT14 dataset. When using a fixed smoothing parameter with self-knowledge as the prior, the BLEU score increases by a small margin, and the ECE does not drop significantly. In another case, where the smoothing parameter is adaptive and the prior label distribution is set to the uniform distribution, there is a meaningful increase in the BLEU score; however, it impairs the ECE score noticeably. We empirically find that this result mainly stems from underconfidence of the model: the confidence score is largely lower than the accuracy. In the experiment with the linearly increasing smoothing parameter $\alpha^{\uparrow}$ and the self-knowledge prior, the BLEU score improves by around 1.6 points, yet the ECE score still shows room for improvement. Since the $\alpha$ value is shared among samples in this experiment, there is no gradient rescaling by an adaptive $\alpha$, which may explain the ECE score being high compared to that of our adaptive $\alpha$. We also look into the choice of the $g$ function: BLEU and negative log-likelihood (NLL). We observe that both $g_{\mathrm{BLEU}}$ and $g_{\mathrm{NLL}}$ greatly enhance the scores. As $g$ has the purpose of selecting the self-teacher with the least generalization error from the set of past checkpoints, any proper metric serves this purpose.
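The checkpoint-selection step performed by $g$ can be sketched as follows; `select_self_teacher` and the toy `dev_nll` scores are hypothetical stand-ins for the actual validation routine, shown only to illustrate the described procedure:

```python
def select_self_teacher(checkpoints, dev_data, g):
    """Pick the past checkpoint whose generalization score g is best.

    `checkpoints` is a list of candidate past training states, and `g`
    maps (checkpoint, dev_data) to a score where higher is better
    (e.g. negative dev-set NLL, or corpus BLEU on the dev set).
    """
    return max(checkpoints, key=lambda ckpt: g(ckpt, dev_data))

# Toy example: score each checkpoint by its (negated) dev-set NLL.
checkpoints = [
    {"name": "epoch1", "dev_nll": 3.2},
    {"name": "epoch2", "dev_nll": 2.7},
    {"name": "epoch3", "dev_nll": 2.9},
]
g_nll = lambda ckpt, _data: -ckpt["dev_nll"]
teacher = select_self_teacher(checkpoints, None, g_nll)  # picks "epoch2"
```

With $g_{\mathrm{BLEU}}$, `g` would instead decode the dev set and score it with BLEU, at a higher cost per checkpoint.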
In conclusion, while the adaptive $\alpha$ plays an important role in regularizing a model, both the adaptive $\alpha$ and the choice of prior label distribution greatly affect model calibration.

# 5 Conclusion & Future Work

In this work, we propose a regularization scheme that dynamically smooths the target label with self-knowledge. Our regularizer self-regulates the amount of smoothing with respect to the entropic level of the model's probability distribution, making the smoothing parameter dynamic per sample and per time step. The idea is theoretically supported by the analysis of gradient rescaling and direction, and is backed up by the empirical results in both model performance and calibration.

# Limitation

The proposed regularization method is model-driven, and hence adds computation cost when self-knowledge is computed. This limitation, however, is not specific to our work but is shared by KD training in general. The issue can be mitigated to some extent when self-knowledge is obtained and saved before training a model. In addition, the smoothing technique requires additional computation for the instance-specific smoothing term (the normalized entropy).

# Acknowledgement

Research on this paper was supported by the Hong Kong Research Grants Council (Grant No. 16204920).

# References

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics.

Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '06, pages 535-541, New York, NY, USA. Association for Computing Machinery.
Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, Roldano Cattoni, and Marcello Federico. 2015. The IWSLT 2015 evaluation campaign. In IWSLT.

Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign. In IWSLT.

George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research, HLT '02, pages 138-145, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.

Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30K: Multilingual English-German image descriptions. In Proceedings of the 5th Workshop on Vision and Language, pages 70-74. Association for Computational Linguistics.

Asish Ghoshal, Xilun Chen, Sonal Gupta, Luke Zettlemoyer, and Yashar Mehdad. 2021. Learning better structured representations using low-rank adaptive label smoothing. In International Conference on Learning Representations.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1321-1330. PMLR.

Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop.

Kyungyul Kim, ByeongMoon Ji, Doyoung Yoon, and Sangheum Hwang. 2021. Self-knowledge distillation with progressive refinement of targets. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 6567-6576.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).

Ujwal Krothapalli and A. Lynn Abbott. 2020. Adaptive label smoothing. CoRR, abs/2009.06432.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics. +Weizhi Li, Gautam Dasarathy, and Visar Berisha. 2020. Regularization via structural label smoothing. In The 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, 26-28 August 2020, Online [Palermo, Sicily, Italy], volume 108 of Proceedings of Machine Learning Research, pages 1453-1463. PMLR. +Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics. +Yang Liu, Sheng Shen, and Mirella Lapata. 2021. Noisy self-knowledge distillation for text summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 692-703, Online. Association for Computational Linguistics. +Clara Meister, Elizabeth Salesky, and Ryan Cotterell. 2020. Generalized entropy regularization or: There's nothing special about label smoothing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Seattle, USA. Association for Computational Linguistics. + +Rafael Müller, Simon Kornblith, and Geoffrey E Hinton. 2019. When does label smoothing help? In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. + +Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. In International Conference on Machine Learning. 
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey E. Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings.

Lutz Prechelt. 2012. Early stopping - but when? In Neural Networks: Tricks of the Trade.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818-2826.

Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, and Sagar Jain. 2021. Understanding and improving knowledge distillation.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

Li Yuan, Francis EH Tay, Guilin Li, Tao Wang, and Jiashi Feng. 2020.
Revisiting knowledge distillation via label smoothing regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3903-3911.

Sukmin Yun, Jongjin Park, Kimin Lee, and Jinwoo Shin. 2020. Regularizing class-wise predictions via self-knowledge distillation. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, and Kaisheng Ma. 2019. Be your own teacher: Improve the performance of convolutional neural networks via self distillation.

Zhilu Zhang and Mert R. Sabuncu. 2020. Self-distillation as instance-specific label smoothing. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

# A Gradient Rescaling by $\alpha$

Proposition 1. Given any two samples $(\mathbf{x}^{(i)},\mathbf{y}^{(i)}), (\mathbf{x}^{(k)},\mathbf{y}^{(k)})\in \mathcal{X}\times \mathcal{Y}$ with $P_{\theta}(y_j^{(i)}|\mathbf{x}^{(i)}) = P_{\theta}(y_j^{(k)}|\mathbf{x}^{(k)})$, the average gradient rescaling factor $w$ over all classes is greater on the sample with higher predictive entropy than on the one with lower predictive entropy.

This proposition builds on Proposition 2 of Tang et al. (2021), which discusses the gradient rescaling effect of KD in logit space. However, there are a few differences, the first of which is the assumption made. Tang et al. (2021) assume that a teacher makes more confident predictions than a student. This is the opposite of our setting, where a teacher makes less confident predictions than a student; this assumption is valid since the teacher is set to be a previous checkpoint of the student. In addition, our work extends the proposition in terms of $\alpha$: the proposition only holds because the proposed method employs an instance-specific $\alpha$.

Proof.
We restate Equation 9 here for clarity.

$$
w_i = \frac{\partial \mathcal{L}_{kd} / \partial z_i}{\partial \mathcal{L}_{ce} / \partial z_i} = (1 - \alpha) + \alpha \frac{P_{\theta}(y_i) - P_{\phi}(y_i)}{P_{\theta}(y_i) - y_i}
$$

The gradient rescaling factor on the target index $j$ is:

$$
w_j = (1 - \alpha) + \alpha \frac{P_{\theta}(y_j) - P_{\phi}(y_j)}{P_{\theta}(y_j) - 1}
$$

Now, the aggregate gradient rescaling factor over the remaining classes is computed as follows:

$$
\begin{aligned}
\sum_{i \neq j} \partial \mathcal{L}_{kd} / \partial z_i &= \sum_{i \neq j} \left[ (1 - \alpha) P_{\theta}(y_i) + \alpha \left( P_{\theta}(y_i) - P_{\phi}(y_i) \right) \right] \\
&= (1 - \alpha) \left( 1 - P_{\theta}(y_j) \right) + \alpha \left( P_{\phi}(y_j) - P_{\theta}(y_j) \right)
\end{aligned}
$$

$$
\sum_{i \neq j} \partial \mathcal{L}_{ce} / \partial z_i = \sum_{i \neq j} P_{\theta}(y_i) = 1 - P_{\theta}(y_j)
$$

$$
\frac{\sum_{i \neq j} \partial \mathcal{L}_{kd} / \partial z_i}{\sum_{i \neq j} \partial \mathcal{L}_{ce} / \partial z_i} = (1 - \alpha) + \alpha \frac{P_{\theta}(y_j) - P_{\phi}(y_j)}{P_{\theta}(y_j) - 1}
$$

Hence the per-target factor and the aggregate factor over the remaining classes coincide:

$$
w_j = \frac{\partial \mathcal{L}_{kd} / \partial z_j}{\partial \mathcal{L}_{ce} / \partial z_j} = \frac{\sum_{i \neq j} \partial \mathcal{L}_{kd} / \partial z_i}{\sum_{i \neq j} \partial \mathcal{L}_{ce} / \partial z_i} \tag{13}
$$

Assuming we are given two samples $(\mathbf{x}^{(i)},\mathbf{y}^{(i)})$ and $(\mathbf{x}^{(k)},\mathbf{y}^{(k)})$ with $P_{\theta}(y_j^{(i)}|\mathbf{x}^{(i)}) = P_{\theta}(y_j^{(k)}|\mathbf{x}^{(k)})$, and that their conditional entropies
differ such that $H(P_{\theta}(Y|\mathbf{x}^{(i)})) > H(P_{\theta}(Y|\mathbf{x}^{(k)}))$, the average gradient rescaling factor over all classes is smaller on the sample with the lower entropy than on the one with the higher entropy:

$$
\mathbb{E} w^{(i)} > \mathbb{E} w^{(k)} \tag{14}
$$

$$
(1 - \alpha^{(i)}) + \alpha^{(i)} \frac{P_{\theta}(y_j) - P_{\phi}(y_j)}{P_{\theta}(y_j) - 1} > (1 - \alpha^{(k)}) + \alpha^{(k)} \frac{P_{\theta}(y_j) - P_{\phi}(y_j)}{P_{\theta}(y_j) - 1} \tag{15}
$$

This holds because $\frac{P_{\theta}(y_j) - P_{\phi}(y_j)}{P_{\theta}(y_j) - 1}$ is negative and $\alpha^{(i)} < \alpha^{(k)}$; hence the proof.

# B Baselines

# B.1 Confidence Penalty

Confidence penalty (Pereyra et al., 2017) adds a negative entropy term to the loss function, so the model is encouraged to maintain entropy at a certain level.

$$
\mathcal{L}_{cf} = -\sum_{i=1}^{|C|} y_i^{(n)} \log P_{\theta}\left(y_i \mid \boldsymbol{x}^{(n)}\right) - \beta H\left(P_{\theta}(\boldsymbol{y} \mid \boldsymbol{x}^{(n)})\right) \tag{16}
$$

Following Meister et al. (2020), the regularization-specific hyperparameter $\beta$ was set to 0.78.

# B.2 TF-KD

TF-KD (Yuan et al., 2020), like conventional knowledge distillation, trains a teacher model prior to training a student; it differs in that the teacher's architecture is the same as the student's. For the hyperparameters used in this paper, we empirically find that a high smoothing parameter leads to better performance. Thus, we set the smoothing parameter to 0.9 and the temperature scaling to 20, as reported in the original paper.

# B.3 SKD-PRT

SKD-PRT (Kim et al., 2021) is a self-knowledge distillation method in which a student model (at epoch $t$) is trained with its own last-epoch checkpoint (epoch $t-1$) over the course of training.
Though the idea is similar to ours, there are two core differences. The first is that we find the teacher model that generalizes well with a function $g$. The second is that SKD-PRT linearly increases $\alpha$ throughout training, a practice that inevitably adds two hyperparameters (the maximum $\alpha$ and the maximum epoch). Following the original work (Kim et al., 2021), we set the maximum $\alpha$ to 0.7 and the maximum epoch to 150 in our experiments.

# B.4 LORAS

LORAS (Ghoshal et al., 2021) jointly learns a soft target and the model parameters during training, with the aim of improving both model performance and model calibration under a low-rank assumption. For its hyperparameters, $\eta$, $\alpha$, the rank, and the dropout probability are set to 0.1, 0.2, 25, and 0.5, respectively.

# B.5 BETA & SD

Zhang and Sabuncu (2020) propose an amortized MAP interpretation of teacher-student training and introduce Beta smoothing, an instance-specific smoothing technique based on the prediction of a teacher network. For the SD-specific hyperparameters, this work sets $\alpha$ to 0.3 and the temperature to 4.0. For the BETA-specific hyperparameters, $\alpha$ and $a$ are set to 0.4 and 4.0, respectively.

# C Dataset Details

IWSLT14 DE-EN contains 160K sentence pairs for training, 7K for validation, and 7K for testing. IWSLT15 EN-VI has 133K, 1.5K, and 1.3K sentences in the training, validation, and test sets, respectively. Lastly, 28K training, 1K validation, and 1K test sentences are used in the Multi30K dataset. Byte pair
For training configuration, the maximum tokens in a batch is set to 4,096. For optimization, Adam (Kingma and Ba, 2015) is used with beta 1 and beta 2 set to 0.9 and 0.98 respectively. We slowly increase the learning rate up to 0.005 throughout the first 4,000 steps, and the learning rate decreases from then on. The source code and training script are included in the supplementary materials. \ No newline at end of file diff --git a/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/images.zip b/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b8dd324ffefab5b742d05d31d0bf8a5077e5c124 --- /dev/null +++ b/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:388c73f3f2a5db45d92144b7f38459b554f6bf66b24504063472bfccdcaf1773 +size 546306 diff --git a/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/layout.json b/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6f7875bf1c24df23f895d43157ae4fdb46f37e16 --- /dev/null +++ b/adaptivelabelsmoothingwithselfknowledgeinnaturallanguagegeneration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c1d2a6fb1f630fad1118d33102f5fc41ab8bf54188b7604bdb89fa6fd738089 +size 409407 diff --git a/adaptivetokenlevelcrosslingualfeaturemixingformultilingualneuralmachinetranslation/008432b4-3758-4b9c-af44-9b02d64f9690_content_list.json b/adaptivetokenlevelcrosslingualfeaturemixingformultilingualneuralmachinetranslation/008432b4-3758-4b9c-af44-9b02d64f9690_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..df0dc7c02efedbb88458e6490ac9331dfeeb2533 --- /dev/null +++ 
b/adaptivetokenlevelcrosslingualfeaturemixingformultilingualneuralmachinetranslation/008432b4-3758-4b9c-af44-9b02d64f9690_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:771f21512f12f86bbccaceb24b693952998bc47f86ae3f0c5f750304f0d26031 +size 119612 diff --git a/adaptivetokenlevelcrosslingualfeaturemixingformultilingualneuralmachinetranslation/008432b4-3758-4b9c-af44-9b02d64f9690_model.json b/adaptivetokenlevelcrosslingualfeaturemixingformultilingualneuralmachinetranslation/008432b4-3758-4b9c-af44-9b02d64f9690_model.json new file mode 100644 index 0000000000000000000000000000000000000000..cc533c7b40d51ea22b5be43d6377a27c61e8691a --- /dev/null +++ b/adaptivetokenlevelcrosslingualfeaturemixingformultilingualneuralmachinetranslation/008432b4-3758-4b9c-af44-9b02d64f9690_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c0ad054b9c700d52146e1a37cf370156098a7522b89ab73d73e24a01f3e06feb +size 142125 diff --git a/adaptivetokenlevelcrosslingualfeaturemixingformultilingualneuralmachinetranslation/008432b4-3758-4b9c-af44-9b02d64f9690_origin.pdf b/adaptivetokenlevelcrosslingualfeaturemixingformultilingualneuralmachinetranslation/008432b4-3758-4b9c-af44-9b02d64f9690_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0597c6b0a81e293b939b0b1962ea2225086e2374 --- /dev/null +++ b/adaptivetokenlevelcrosslingualfeaturemixingformultilingualneuralmachinetranslation/008432b4-3758-4b9c-af44-9b02d64f9690_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b0bb4a83293ec2ed0ab83d62e42396479789062fee71579a4be56e6ca31b635 +size 2281411 diff --git a/adaptivetokenlevelcrosslingualfeaturemixingformultilingualneuralmachinetranslation/full.md b/adaptivetokenlevelcrosslingualfeaturemixingformultilingualneuralmachinetranslation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..55356ff103b981d0d5164d53a219a49727db535b --- /dev/null +++ 
b/adaptivetokenlevelcrosslingualfeaturemixingformultilingualneuralmachinetranslation/full.md @@ -0,0 +1,410 @@

# Adaptive Token-level Cross-lingual Feature Mixing for Multilingual Neural Machine Translation

Junpeng Liu $^{1}$ Kaiyu Huang $^{2}$ Jiuyi Li $^{1}$ Huan Liu $^{1}$ Jinsong Su $^{3}$ Degen Huang $^{1*}$

$^{1}$ Dalian University of Technology

$^{2}$ Institute for AI Industry Research, Tsinghua University $^{3}$ Xiamen University

{liujunpeng_nlp, lee.91, liuhuan4221}@mail.dlut.edu.cn

huangkaiyu@air.tsinghua.edu.cn

jssu@xmu.edu.cn huangdg@dlut.edu.cn

# Abstract

Multilingual neural machine translation aims to translate multiple language pairs in a single model and has shown great success thanks to the knowledge transfer across languages with the shared parameters. Despite its promise, this share-all paradigm suffers from an insufficient ability to capture language-specific features. Currently, the common practice is to insert or search for language-specific networks to balance the shared and specific features. However, those two types of features are not sufficient to model the complex commonality and divergence across languages, such as the locally shared features among similar languages, which leads to sub-optimal transfer, especially in massively multilingual translation. In this paper, we propose a novel token-level feature mixing method that enables the model to capture different features and dynamically determine the feature sharing across languages. Based on the observation that the tokens in the multilingual model are usually shared by different languages, we insert a feature mixing layer into each Transformer sublayer and model each token representation as a mix of different features, with a proportion indicating its feature preference. In this way, we can perform fine-grained feature sharing and achieve better multilingual transfer.
Experimental results on multilingual datasets show that our method outperforms various strong baselines and can be extended to zero-shot translation. Further analyses reveal that our method can capture different linguistic features and bridge the representation gap across languages. $^{1}$

# 1 Introduction

Multilingual neural machine translation (MNMT) (Ha et al., 2016; Johnson et al., 2017) handles several translation directions in a single model. These multilingual models have been shown to be capable of facilitating knowledge transfer across different languages (Lakew et al., 2018; Tan et al., 2019; Zhang et al., 2020) and enabling translations between language pairs unseen in training (Johnson et al., 2017; Al-Shedivat and Parikh, 2019; Gu et al., 2019; Zhang et al., 2020). Due to these advantages, MNMT is appealing and has drawn much attention in recent years.

The success of MNMT comes at the cost of an insufficient ability to capture language-specific features (Zhang et al., 2021). Since the model parameters are shared across languages, the MNMT model tends to preserve the shared features but ignore the language-specific ones. Therefore, researchers resort to language-specific modeling to capture and balance those two types of features. Some works attempt to insert additional language-specific modules into the original MNMT model (Wang et al., 2019; Bapna and Firat, 2019; Zhang et al., 2020, 2021). However, those methods are sensitive to the structure and location of the language-specific modules and require specialized manual design. To avoid this problem, other works turn to searching for language-specific networks in the MNMT model (Lin et al., 2021; Xie et al., 2021). Those methods generally adopt a multi-stage training strategy to find and fine-tune the language-specific parameters, which increases the training complexity, especially in massively multilingual translation settings.
Another pitfall of the above methods is that dividing the features into shared and language-specific ones may not be sufficient to model the complicated commonality and divergence across languages. Previous studies (Tan et al., 2019; Oncevay et al., 2020) have shown that similar languages generally share more commonality, and clustering them together can boost their translation performance. Moreover, Lin et al. (2021) also demonstrate that there are some overlaps between the language-specific networks of similar languages. These observations indicate that there are some locally shared features among similar languages which are important for multilingual transfer. However, those features are not effectively used in current language-specific models, which motivates us to model more fine-grained features of different languages to facilitate multilingual transfer.

In this work, we propose a novel token-level cross-lingual feature mixing method that enables the model to adaptively determine the feature sharing during training. Based on the observation that the tokens in the multilingual vocabulary are usually shared by different languages, we assume that each token representation contains a mix of lexical and linguistic features, with a feature proportion indicating its feature preference. Specifically, we employ a set of linear transformations to capture different features, over which we perform weighted feature aggregation with the specific feature proportion. By varying the feature proportions, we can retain the locally shared features and control the knowledge sharing across different languages. Our main contributions are summarized as follows:

- We propose a method that can perform fine-grained feature extraction and aggregation in the MNMT model without an explicit shared/specific division, and can dynamically determine the feature sharing across languages with the adaptive feature proportions.
- We study the feature proportions and the representation space learned by our method, and find that our method can implicitly characterize a mix of linguistic features and narrow the representation gap across languages.
- We conduct extensive experiments on several multilingual datasets in different translation scenarios. Experimental results and in-depth analyses show that our method outperforms the language-specific models, especially in massively multilingual translation, and can be easily extended to boost zero-shot translation and alleviate the off-target issue.

# 2 Related Work

Our work closely relates to language-specific modeling in MNMT. Early studies focus on increasing the shared parts of separate bilingual models for better knowledge transfer. These works include sharing encoders (Dong et al., 2015), sharing attention layers (Firat et al., 2016) and sharing decoders (Zoph and Knight, 2016). Later, Ha et al. (2016) and Johnson et al. (2017) develop a universal MNMT model with an artificial language token added to the source sentence to indicate the target language. However, this share-all paradigm generally captures the commonality of languages while ignoring the specific features of each language. To this end, researchers turn to language-specific modeling for a better balance between shared and specific features, including redesigning parameter sharing strategies (Blackwood et al., 2018; Sachan and Neubig, 2018; Wang et al., 2019; Vázquez et al., 2019), training separate models for different language clusters (Tan et al., 2019), inserting lightweight adapters (Bapna and Firat, 2019), routing shared or language-specific paths (Zhang et al., 2021), dividing general and specific networks or neurons (Lin et al., 2021; Xie et al., 2021) and parameter differentiation (Wang and Zhang, 2021).
However, these methods do not make full use of the locally shared features across similar languages, leading to sub-optimal cross-lingual transfer, especially in massively multilingual translation. Instead, we propose a feature mixing method which is a variant of Mixture-of-Experts (MoE) models (Shazeer et al., 2017; Lepikhin et al., 2020). We discuss two gating mechanisms and analyze the impact of the location and sparsity of the MoE layer (CLM module) on multilingual translation performance.

Our work is also related to zero-shot translation. Some studies resort to forming language-agnostic representations. Arivazhagan et al. (2019a) and Pham et al. (2019) introduce auxiliary training objectives to align the representations of different languages. Pan et al. (2021) bridge the cross-lingual representations with an additional dictionary and contrastive learning. Liu et al. (2021) disentangle the positional information by relaxing the structural constraint. Other studies explore enhancing the language-specific features in translation. Wang et al. (2019) and Yang et al. (2021) employ an additional target language prediction task to train the model to distinguish different languages. Philip et al. (2020) adopt monolingual adapters to model the language-specific features. Our work continues in these directions, but with a special focus on combining different feature mixing models in the encoder and decoder to build a language-agnostic encoder and a language-aware decoder.

![](images/1b69c2de36811e9abeb236624f19f9a0569efcaef3380e009ea0da19f3454bdb.jpg)
(a) Language-specific model

![](images/f605dc77e5436fb259502a5f9383ca53d27fbf8281bb490327132d455ee682eb.jpg)
(b) $sCLM$ model

![](images/540fa2ede17df390c5d46c406a5ebb807764a8be6adff1fbe36d91cf24305e84.jpg)
(c) $mCLM$ model

Figure 1: Comparison of the language-specific, sCLM and mCLM models. The residual connection and layer normalization are not visualized here for brevity.

# 3 Method

Our main idea is to model the commonality and divergence of different languages in a fine-grained way to retain more shared features, especially those locally shared by similar languages, to facilitate multilingual transfer. To achieve this, each language is considered to contain a mix of different features rather than solely the shared and specific ones, as shown in Figure 1. Specifically, we first project each token representation into different subspaces with a set of linear transformations to capture different features and calculate the corresponding feature proportion based on the token representation itself. Then we take the weighted average of the different linear transformations as the feature-mixed representation. The proportion indicates the importance of each feature and determines the knowledge sharing across different languages.

# 3.1 Feature Proportion

Our proposed method is motivated by the observation that a token (e.g. a word or subword) in the multilingual vocabulary usually contains several different lexical and linguistic features. On the one hand, a token shared by different languages naturally embodies different lexical and semantic meanings. On the other hand, a token also contains various contextual and structural information because its representation is essentially learnt from all the tokens in the sentence. Inspired by Jiang et al. (2020), we assume that each token holds a mix of those lexical and linguistic features with a certain proportion indicating its feature preference in different languages. Specifically, given a token representation $x \in \mathbb{R}^d$ and $k$ features, we parameterize the feature proportion $\mathcal{P}(x)$ with a linear transformation followed by a softmax function.
We also add a smoothing parameter $\alpha$ to prevent the output $\mathcal{P}(x)$ from collapsing towards 0 or 1:

$$
\mathcal{P}(x) = (1 - \alpha) \cdot \mathrm{softmax}(xP) + \alpha / k \tag{1}
$$

where $P \in \mathbb{R}^{d \times k}$ is the feature projection weight and $\alpha \in (0,1)$ smooths the probability so as to keep all the features active.

# 3.2 Adaptive Token-level Feature Mixing

Previous studies (Bapna and Firat, 2019; Zhang et al., 2020, 2021) employ individual parameters for each language pair to capture the language-specific features. However, those methods are weak in their ability to capture the locally shared features among similar languages. To solve this problem, we take the weighted aggregation of different features, based on a specific proportion $\mathcal{P}(x)$, as the language-specific representation. In this way, the feature sharing across different languages can be controlled by varying their feature proportions. Specifically, we consider linear transformations $\{W_j\}_{j=1}^k$ for the $k$ features; applied to the $i$-th input token representation $h_i$, their weighted aggregation can be written as follows:

$$
\tilde{h}_i = \sum_{j=1}^{k} h_i W_j \cdot \mathcal{P}_j(h_i) \tag{2}
$$

where $W_j$ is the linear transformation used to model the $j$-th feature and $\mathcal{P}_j(h_i)$ denotes the proportion of the $j$-th feature for representation $h_i$. In multilingual translation, the token representations in each source input naturally contain the target language information, since a target language token is added to the source sentence. This means the feature proportions of the same token can differ when it is translated into different languages, which makes our method more flexible in capturing the specific features under different conditions.
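As a concrete illustration of Equations 1 and 2, the sketch below mixes $k$ linear feature transformations for a single token with a smoothed softmax proportion. All names and shapes here are our own illustrative choices; the actual model applies this inside Fairseq's Transformer layers with learned weights:

```python
import numpy as np

def feature_proportion(x, P, alpha=0.1):
    """Eq. 1: smoothed softmax over the k feature logits of one token x."""
    logits = x @ P                          # (k,) feature logits
    e = np.exp(logits - logits.max())       # numerically stable softmax
    p = e / e.sum()
    return (1 - alpha) * p + alpha / P.shape[1]

def feature_mixing(h, Ws, P, alpha=0.1):
    """Eq. 2: proportion-weighted sum of the k linear transformations."""
    p = feature_proportion(h, P, alpha)
    return sum(p[j] * (h @ W) for j, W in enumerate(Ws))

rng = np.random.default_rng(0)
d, k = 8, 4                                 # toy dimensions
h = rng.normal(size=d)                      # one token representation
P = rng.normal(size=(d, k))                 # feature projection weight
Ws = [rng.normal(size=(d, d)) for _ in range(k)]

p = feature_proportion(h, P)                # sums to 1; every entry >= alpha/k
h_tilde = feature_mixing(h, Ws, P)          # mixed representation, shape (d,)
```

Because of the smoothing term $\alpha/k$, every feature keeps a non-zero share, so locally shared features are never switched off entirely.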
Our feature mixing method can be seen as a heuristic variation of Mixture-of-Experts (MoE) models (Shazeer et al., 2017; Lepikhin et al., 2020). However, unlike previous MoE models, which combine experts through a sparse gating mechanism, we adopt a soft, smoothed gating network to retain all potentially shared features, and we replace the non-linear experts with linear ones for lower memory cost and faster training.

# 3.3 Cross-lingual Mixing Model

Based on the token-level feature mixing strategy, we introduce our cross-lingual mixing (CLM) module and its implementation in the Transformer. Given the input representation $h$, CLM calculates the feature proportion $\mathcal{P}(h)$ and the weighted average representation $\tilde{h}$ as in Equations 1 and 2. To make our CLM module optional and pluggable into any part of the Transformer network, we apply a residual connection followed by layer normalization (LN). The CLM module is finally formulated as follows:

$$
z = \mathrm{LN}(h + \tilde{h}) \tag{3}
$$

Since the tokens have different representations at each Transformer sublayer, their corresponding feature proportions are also different. To this end, we inject CLM modules into each sublayer and distinguish the feature projection weight $P$ across different Transformer layers. Considering that a token may have various feature proportions in different languages, we propose two variants of the CLM model according to the feature projection weight settings:

$sCLM$ shares a single feature projection weight $P_{s} \in \mathbb{R}^{d \times k}$ across all the language pairs. This strategy may ease the proportion allocation in our method as it is highly input-dependent.

$mCLM$ employs a set of language-specific feature projection weights $\{P_{m}\in \mathbb{R}^{d\times k}\}_{m = 1}^{N}$ for different language pairs.
Although this strategy involves more parameters than $sCLM$, we expect that the different proportion weights will make it more flexible in proportion allocation.

# 4 Experiments

# 4.1 Datasets

We evaluate our method in English-to-many and many-to-English translation scenarios. We also extend our method to zero-shot translation based on the observations in English-centric translation. For en-xx and xx-en translation, we test our method on the OPUS-100 and WMT benchmarks. For zero-shot translation, we evaluate our method on three datasets: IWSLT-17, Europarl and WMT-5. The detailed data descriptions are listed in Appendix A.1. We apply the byte pair encoding (BPE) algorithm (Sennrich et al., 2016) using SentencePiece (Kudo and Richardson, 2018)$^3$ to preprocess multilingual sentences with a joint vocabulary of 64K for OPUS-100/WMT-14 and 32K for IWSLT-17/Europarl/WMT-5.

# 4.2 Baselines

To make our evaluation convincing, we re-implement the original MNMT model and several previous works for comparison.

Multilingual (Johnson et al., 2017) The unified model which handles multiple languages in a single encoder-decoder model by adding a special language token to the source sentence.

+Adapter (Bapna and Firat, 2019) A set of lightweight adapters injected into the vanilla MNMT model. The dimension of the projection layer is set to 128 and we train the model from scratch.

+CLSR (Zhang et al., 2021) This method employs a series of hard binary gates conditioned on token representations to dynamically choose between shared and language-specific paths.

Deep Transformer (Zhang et al., 2020) This method improves the model capacity by increasing the model depth to build a strong baseline. For fair comparison, the model depth (for both encoder and decoder) is set to 26 and 8 for OPUS-100 and WMT-14, respectively.
# 4.3 Training and Evaluation

We employ the Transformer-Base setting (Vaswani et al., 2017) in all our experiments, using the open-source Fairseq implementation (Ott et al., 2019)$^4$. The detailed model settings are in Appendix B. We insert the CLM modules into both encoder and decoder for en-xx translation, but into the encoder only for xx-en translation, based on the ablation study in Section 4.4.

We report detokenized case-sensitive BLEU as provided by SacreBLEU (Post, 2018)$^5$. Following Zhang et al. (2021), we split the language

$^3$ https://github.com/google/sentencepiece
$^4$ https://github.com/pytorch/fairseq
$^5$ Signature: BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a+version.1.5.1.
| Model | Model Size | en-xx Low | en-xx Med | en-xx High | en-xx All | en-xx WR | xx-en Low | xx-en Med | xx-en High | xx-en All | xx-en WR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Multilingual | 76.96M | 26.54 | 25.72 | 20.89 | 24.38 | - | 33.42 | 32.87 | 28.91 | 31.73 | - |
| +Adapter | 224.81M | +2.64 | +3.25 | +2.67 | +2.85 | 93.62 | +1.37 | +2.36 | +1.60 | +1.78 | 88.30 |
| +CLSR | 136.08M | +2.35 | +2.29 | +1.73 | +2.12 | 94.68 | +1.64 | +1.14 | +0.98 | +1.25 | 88.30 |
| Deep Transformer | 224.09M | +3.50 | +4.58 | +3.49 | +3.86 | 96.81 | +1.51 | +1.77 | +3.66 | +2.31 | 86.17 |
| sCLM◇ | 225.49M | +3.56 | +4.33 | +3.13 | +3.67 | 96.81 | +2.34 | +2.56 | +2.56 | +2.49 | 97.87 |
| mCLM◇ | 224.63M | +2.43 | +3.79 | +2.82 | +3.01 | 94.68 | +2.61 | +2.14 | +1.68 | +2.14 | 92.55 |
Table 1: Translation quality for en-xx and xx-en on the OPUS-100 dataset. $sCLM^{\diamond}$ and $mCLM^{\diamond}$ represent the best $sCLM$ and $mCLM$ models, respectively. To match Adapter in parameters, the feature number $k$ in $sCLM^{\diamond}$ is 280/560 for en-xx/xx-en translation, while it is 194/388 in $mCLM^{\diamond}$. Best results are highlighted in bold.
| Model | Model Size | en-xx Low | en-xx Med | en-xx High | en-xx All | en-xx WR | xx-en Low | xx-en Med | xx-en High | xx-en All | xx-en WR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Multilingual | 76.91M | 16.35 | 19.05 | 25.07 | 20.32 | - | 23.08 | 25.67 | 27.70 | 25.46 | - |
| +Adapter | 97.37M | +0.89 | +1.06 | +1.12 | +1.03 | 100.0 | +0.23 | +0.66 | +0.39 | +0.39 | 76.92 |
| +CLSR | 93.75M | +0.44 | +0.52 | +0.64 | +0.54 | 100.0 | +0.17 | +0.56 | +0.33 | +0.32 | 92.31 |
| Deep Transformer | 91.63M | +0.63 | +1.06 | +1.17 | +0.79 | 100.0 | +0.68 | +0.80 | +0.17 | +0.54 | 76.92 |
| sCLM◇ | 95.49M | +0.85 | +0.86 | +1.01 | +0.92 | 100.0 | +1.00 | +1.11 | +0.84 | +0.96 | 100.0 |
| mCLM◇ | 98.09M | +0.76 | +1.00 | +1.22 | +1.00 | 100.0 | +0.58 | +0.99 | +0.68 | +0.71 | 100.0 |
Table 2: Translation quality for en-xx and xx-en on the WMT-14 dataset. The feature number $k$ in the two CLM models is 35/70 for en-xx/xx-en translation. Best results are highlighted in bold.

pairs in OPUS-100 and WMT-14 into three groups (Low/Med/High) according to their data size. We report the average BLEU for each group and the Win Ratio (WR), i.e. the proportion of language pairs on which our method beats the original MNMT model. In zero-shot translation, we also report the off-target rate to measure the accuracy of translating into the right target language.

# 4.4 Results

Results on OPUS-100. The results are summarized in Table 1. The comparisons between the multilingual baseline and our method suggest that the two variants of the CLM model improve translation performance for both en-xx and xx-en directions in most language pairs (up to +3.67 BLEU & 96.81 WR on en-xx and +2.49 BLEU & 97.87 WR on xx-en). Moreover, our $sCLM^{\diamond}$ also yields results competitive with the strong deeper baseline. Compared to +Adapter, our $sCLM^{\diamond}$ and $mCLM^{\diamond}$ achieve better translation performance and WR scores with a similar number of parameters. The results show that adding an adapter module to capture language-specific features may not be sufficient in massively multilingual settings. Compared with +CLSR, our method also performs better, showing that the feature mixing strategy is
Another difference is that our method does not surpass +Adapter on en-xx directions. We ascribe this to the smaller number of similar language pairs in WMT-14, where feature mixing may cause interference across languages, leading to performance degradation in some language pairs.

Ablation Study. To study the efficacy of each component in the CLM module, we evaluate models with different settings on the OPUS-100 dataset. The results are summarized in Table 3 and we make the following observations:

- When removing the gating mechanism from CLM modules, the language-specific model $LS$ fails to surpass the multilingual baseline in
| Model | Enc | Dec | Model Size | en-xx Low | en-xx Med | en-xx High | en-xx All | en-xx WR | xx-en Low | xx-en Med | xx-en High | xx-en All | xx-en WR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Multilingual | | | 76.96M | 26.54 | 25.72 | 20.89 | 24.38 | - | 33.42 | 32.87 | 28.91 | 31.73 | - |
| LS | √ | √ | 126.25M | -1.91 | +0.02 | -0.19 | -0.69 | 37.23 | -1.46 | -1.38 | -0.93 | -0.92 | 23.70 |
| sCLM | √ | √ | 126.85M | +2.59 | +2.44 | +1.92 | +2.32 | 96.81 | +0.17 | +0.27 | +1.66 | +0.70 | 75.75 |
| sCLM-E | √ | | 101.90M | +0.48 | +0.99 | +1.02 | +0.83 | 84.04 | +1.79 | +1.16 | +1.10 | +1.35 | 96.81 |
| sCLM-D | | √ | 101.90M | +0.63 | +1.10 | +1.05 | +0.92 | 86.17 | -1.15 | -0.79 | +0.71 | -0.41 | 59.57 |
| mCLM | √ | √ | 180.56M | +2.02 | +3.22 | +2.49 | +2.58 | 94.68 | +1.53 | +1.80 | +1.97 | +1.77 | 88.30 |
| mCLM-E | √ | | 128.75M | +1.33 | +1.87 | +1.65 | +1.62 | 90.43 | +1.83 | +1.65 | +1.28 | +1.59 | 91.49 |
| mCLM-D | | √ | 128.75M | +1.45 | +1.96 | +1.63 | +1.68 | 88.30 | +0.26 | +0.68 | +0.87 | +0.60 | 78.72 |
| Dedicated | √ | √ | 153.70M | +1.93 | +2.81 | +2.17 | +2.30 | 90.43 | +1.01 | +1.45 | +1.80 | +1.42 | 85.11 |
Table 3: Ablation study on the OPUS-100 dataset. “√” denotes that the corresponding CLM modules are inserted in the encoder or the decoder. “LS”: a language-specific model which removes the gating mechanism from the CLM modules and makes the linear transformations $\{W_j\}_{j=1}^k$ language-specific. In particular, we keep the number of features equal to the number of languages. “Dedicated”: the combination of sCLM-E and mCLM-D. Best results are highlighted in bold.

most language pairs. The performance difference between $LS$ and +Adapter shows that the structure and location of the language-specific modules have a large impact on translation performance, and that the gating mechanism is important to mitigate the performance decline.

- For en-xx translation, the CLM modules are important to both the encoder and the decoder, while for xx-en translation, performance tends to be better when the CLM modules are inserted only into the encoder.
- Replacing the shared feature projection weight $P_{s}$ with language-specific ones $P_{m}$ ($sCLM$ vs. $mCLM$) can further enhance the translation quality, especially on xx-en translation. We conjecture that because all xx-en directions share the same target language (English), it is hard for $sCLM$ to capture the specific characteristics of each language pair with the shared proportion weight, as the resulting feature proportions are similar to each other. By contrast, $mCLM$ employs different projection weights for each language pair, making it more flexible in modeling the differences across language pairs. The performance of the Dedicated model partly supports this conjecture.

More Comparisons. To further illustrate the superiority of our method, we quantify the trade-off between adapter/CLM capacity and performance gains on the OPUS-100 dataset.
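For reference, the Win Ratio and per-group averages reported in Tables 1-3 reduce to simple bookkeeping over per-language-pair BLEU deltas. The sketch below uses made-up pair names and deltas purely for illustration:

```python
# Hypothetical BLEU deltas over the multilingual baseline, one per language pair.
deltas = {"en-aa": 2.1, "en-bb": -0.3, "en-cc": 1.4, "en-dd": 0.8, "en-ee": 0.0}
# Hypothetical resource-size grouping of the same pairs.
groups = {"en-aa": "Low", "en-bb": "Low", "en-cc": "Med",
          "en-dd": "High", "en-ee": "High"}

def win_ratio(deltas):
    """Share of language pairs (in %) where the model beats the baseline."""
    return 100.0 * sum(d > 0 for d in deltas.values()) / len(deltas)

def group_average(deltas, groups, name):
    """Average BLEU delta over the pairs belonging to one resource group."""
    vals = [d for pair, d in deltas.items() if groups[pair] == name]
    return sum(vals) / len(vals)
```

With these toy numbers, `win_ratio(deltas)` is 60.0 (three of five pairs improve) and the Low-group average is 0.9.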
The results are depicted in Figure 2. We also plot CLSR in the figure for a comprehensive comparison. $sCLM$ consistently outperforms Adapter and CLSR on both en-xx and xx-en translation with a similar number of parameters. Moreover, $sCLM$ achieves the best results with a $20\%-30\%$ parameter reduction compared with Adapter. $mCLM$, by contrast, only shows its superiority on xx-en translation, owing to its larger parameter count. We also compare the decoding speed of each method in Appendix C.1.

![](images/48cd9018948f2eecffb65b3894a3ea53ae11a0edb6ef26db4071947cf655db75.jpg)
(a) en-xx translation

![](images/610a83f92243d8adbc171e10508441b62b5807169ee7c77cc24d12bc83592b31.jpg)
(b) xx-en translation

Figure 2: Comparisons of Adapter, CLSR, sCLM and mCLM under different model sizes.

# 5 Analysis

# 5.1 Feature Proportion Similarity

In our method, each token representation is encoded by aggregating all the features with a specific proportion. We explore whether CLM learns to allocate these feature proportions according to linguistic characteristics. We study the proportion allocation of $sCLM$ for en-xx translation on the OPUS-100 testset. Specifically, we calculate the cosine similarity of different language pairs with their average token-level feature proportions (ATP) in both the encoder and the decoder. For
| Rank | it (enc) | it (dec) | ru (enc) | ru (dec) | hi (enc) | hi (dec) | tr (enc) | tr (dec) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | es | pt | uk | uk | ur | ne | ko | ja |
| 2 | pt | ca | mk | bg | ta | mr | ja | tk |
| 3 | fr | es | sk | be | ug | gu | ml | eo |
| 4 | gl | gl | de | mk | tg | cs | bs | et |
| 5 | ca | fr | bg | ky | bn | si | pl | uz |
Table 4: Languages with the top-5 most similar ATP vectors.

instance, given the testset of language pair $l$, $\mathcal{D}_l$, the ATP in the encoder is formulated as follows:

$$
\mathrm{ATP}_{e}^{l} = \frac{\sum_{X \in \mathcal{D}_{l}} \mathcal{P}_{e}}{\sum_{X \in \mathcal{D}_{l}} |X| \, |\mathcal{N}_{enc}|} \tag{4}
$$

where $|X|$ is the length of the input sentence $X$, $\mathcal{N}_{enc}$ represents the set of all the CLM modules in the encoder, and $\mathcal{P}_e$ denotes the total feature proportion of all the tokens in sentence $X$, which is given by $\mathcal{P}_e = \sum_{x \in X} \sum_{m \in \mathcal{N}_{enc}} \mathcal{P}_m(x)$. For each language pair $l$, we select the languages with the top-5 cosine similarity. Results for several languages are presented in Table 4 (see Appendix C.2 for full results) and we have two major findings:

- $sCLM$ captures the relationships within a language branch well. As shown in Table 4, for languages from branches such as Romance (It) and Slavic (Ru), their most similar languages generally come from the same language branch. These results show that $sCLM$ can implicitly capture not only the similarities between languages but also the differences among language branches, even though they all belong to the Indo-European family. Moreover, languages from the same branch differ in their most similar languages, suggesting that $sCLM$ can characterize the specific features of languages by varying their feature proportions.
- $sCLM$ can also capture word order divergence. The dominant word order for most languages in our experiments is SVO, while for languages from branches such as Indic (Hi) or Turkic (Tr), SOV is usually the dominant type. As shown in Table 4, $sCLM$ selects languages of the same word order (SOV) as their most similar languages, even though those languages belong to different language families and may not even share the same scripts. For example, the most similar languages for Tr (Turkish) in the encoder and decoder are Ko (Korean) and Ja
For example, the most similar language for Tr (Turkish) in the encoder and decoder are Ko (Korean) and Ja + +(Japanese), respectively. Another explanation for this result is that those three languages are all exclusively concatenative languages. + +In addition to the above findings, we also observe that $sCLM$ can capture regional and cultural influences. For example, Ms (Malay), Id (Indonesian) and Vi (Vietnamese) share more similarities because they are close to each other in geographical location. Zh (Chinese) and Ja (Japanese) are more similar in the decoder due to cultural influences. These observations show that $sCLM$ can characterize complex relationships across languages and fuse those information together well. + +# 5.2 Representation Analyses + +To interpret the superiority of our method over baselines, we delve into the encoder representations incurred by models on xx-en translations. We first employ the accuracy of similarity search tasks as a quantitative indicator of cross-lingual representation alignment following Pan et al. (2021), and then we visualize some sentence representations for further study and comparison. + +# 5.2.1 Similarity Search + +The data computing representations come from TED (Qi et al., 2018) and Flores (Goyal et al., 2021) as they provide multi-way translations in which sentences from each language are semantically equivalent to each other. For TED, we construct a multi-way parallel testset of 2296 samples covering 15 languages. For Flores, we select the first 100 sentences from each language resulting in a multi-way testset of 75 languages. The detailed descriptions of the two testsets are presented in Appendix A.2. + +We conduct experiments in both English-Centric and Zero-Shot scenarios, and report the average top-1 accuracy of sentence similarity research on each dataset. The sentence representations are calculated by averaging the encoder outputs. The results are listed in Table 5. 
English-Centric: Since English has never been seen by the encoder for xx-en translation, there is no available projection weight for $mCLM$ to encode English sentences. Therefore, we only show the results of $sCLM$ in this scenario. Our $sCLM$ achieves notable accuracy improvements on both the TED and Flores testsets, suggesting that $sCLM$ generalizes well to English with the shared projection weight and narrows the representation gap between

![](images/4fcbba88e0670082823e5cd4a86100aa8cc5b33216e110b692d078ff5fed6825.jpg)
(a) Multilingual baseline

![](images/057229205c403b87d488c7118afeab69196aff5e13fadc6254307103c652dbff.jpg)
(b) $sCLM$

![](images/0b28f21ff80f3e72606050893e141398e578f5729a8fa07568c1f092b93a9a52.jpg)
(c) $mCLM$

Figure 3: t-SNE visualizations of the encoder representations of 14 low-resource languages on xx-en translation, encoded by the Multilingual baseline, sCLM and mCLM.
| Model | English-Centric TED | English-Centric Flores | Zero-Shot TED | Zero-Shot Flores |
| --- | --- | --- | --- | --- |
| Multilingual | 20.5% | 39.1% | 80.5% | 74.8% |
| sCLM | 36.4% | 58.3% | 84.8% | 80.0% |
| mCLM | - | - | 84.2% | 75.1% |
Table 5: Average top-1 accuracy of sentence similarity search on the TED and Flores testsets in the English-Centric and Zero-Shot scenarios.

English sentences and their semantic equivalents in other languages.

Zero-Shot: The overall accuracy follows the rule that Multilingual $< mCLM < sCLM$, showing that the two proposed models can boost cross-lingual representation alignment. One noticeable observation is that the improvements of $mCLM$ on Flores are not as large as those on TED. We further visualize the sentence representations to explain this point and study the differences between the two proposed models.

# 5.2.2 Visualization and Comparison

To further study the representation space learned by our $sCLM$ and $mCLM$, we visualize the encoder representations on xx-en translation by reducing the 512-dim representations to 2-dim with t-SNE (Van der Maaten and Hinton, 2008). We use the Flores devtest set for visualization as it covers languages of different data sizes. For clarity, we split the 74 non-English languages into three groups (Low/Med/High). We also visualize the representations of the multilingual baseline for comparison. The visualizations on low-resource languages are depicted in Figure 3 and the results on med- and high-resource languages are presented in Appendix C.3. We make the following observations:

- For the baseline model, most sentences from high-resource languages are clustered with their semantic equivalents in other languages, while med- and especially low-resource languages form their own distinct clusters.
- For $sCLM$, sentences from low- and med-resource languages start to be assigned to their semantic clusters, and the clustering results on high-resource languages are better than the multilingual baseline.
- For $mCLM$, it strengthens the trend that sentences from low-resource languages tend to form their own clusters, despite the better clustering results in high-resource languages.
These observations may explain the improvement gaps between TED and Flores (3.7% vs. 0.3%) in the Zero-Shot scenario in Table 5, since all the languages in TED are high-resource.

These observations also show the differences between our $sCLM$ and $mCLM$ models. $sCLM$ improves translation in the sense that it bridges the representation gap across languages, while $mCLM$ maps the representations of different languages into distinct subspaces, especially for low-resource languages. We argue that the representations learned by $sCLM$ are more appealing, as they cluster sentences based on their semantic similarities. Compared to high-resource languages, the representations of low- and med-resource languages are still not clustered well, which calls for further research.

# 5.3 Extension to Zero-shot Translation

Recent studies (Arivazhagan et al., 2019a; Liu et al., 2021) show that zero-shot translation can
| Dataset | Pivot | Multilingual | +sCLM-E | +mCLM-D |
| --- | --- | --- | --- | --- |
| IWSLT-17 | 19.80 | 15.28 (7.23) | 17.68 (5.48) | 18.77 (2.46) |
| Europarl multiway | 24.01 | 20.76 (0.78) | 22.79 (0.51) | 22.94 (0.50) |
| Europarl w/o overlap | 26.84 | 23.51 (0.67) | 25.64 (0.52) | 25.68 (0.46) |
| Europarl full | 28.76 | 27.32 (0.51) | 28.17 (0.49) | 28.10 (0.49) |
| WMT-5 | 14.70 | 5.41 (51.0) | 6.12 (48.4) | 9.17 (25.0) |
Table 6: Translation results on zero-shot directions. The average off-target rates $(\%)$, calculated by the off-the-shelf LangID model from FastText (Joulin et al., 2016), are reported in brackets.

be boosted by encouraging the encoder to learn language-agnostic representations. Based on the observations in Section 5.2, we apply the CLM models to zero-shot translation. Specifically, we insert $sCLM$ into the encoder to encourage language-independent representations. Moreover, we also use $mCLM$ in the decoder to enhance the ability to distinguish different target languages. In Table 6, our method substantially improves zero-shot translation quality and reduces off-target translations even in the very challenging case of WMT-5, where languages are from different language branches and do not share scripts. In addition, our method also shows results competitive with the pivot models through English. These results demonstrate the strong transfer ability of our method.

# 5.4 About Sparsity

To verify whether all the features are essential to the representations, we study sparsity by selecting the top-$w$ most important features for each token representation and pruning the rest. The performance of $sCLM$ with different $w$ is plotted in Figure 4. The performance on en-xx translation degrades noticeably only when $w < 14$, suggesting that some features are not important to the translation quality and can be pruned. Similar results can also be observed on xx-en translation. However, the degradation comes earlier ($w < 54$) than in en-xx translation, showing that $sCLM$ is more sensitive to sparsity on xx-en translation.

# 6 Conclusion

In this paper, we propose a token-level cross-lingual feature mixing method that can capture different features and dynamically determine the feature sharing across languages. We employ a set of linear transformations to capture different features and aggregate them with specific proportions for each token representation.
In this way, we can perform fine-grained feature sharing and achieve better multilingual transfer. Experimental results on multilingual datasets show that our method outperforms various strong baselines and can be extended to zero-shot translation. Further analyses reveal that our method can capture several different linguistic features and bridge the representation gap across languages. In future work, we plan to further study how to narrow the representation gap across low-resource languages for better translation performance and knowledge transfer.

![](images/5751cab3717d856683a81eaa010509278416b6765e43d67cfb8b3d01a1dd912e.jpg)
Figure 4: $\Delta$BLEU score as $w$ increases in en-xx and xx-en translation on the OPUS-100 dataset.

# Limitations

Despite its effectiveness, our method has the following limitations. An obvious limitation is that we employ additional parameters to model the different features, which eases the implementation of our method in massively multilingual translation but increases the training cost and slows down the decoding speed. Another limitation is that although our method can bridge the representation gap across languages, the sentence representations in low-resource languages still tend to form their own distinct clusters. In the future, we plan to improve the representation space of low-resource languages in multilingual translation.

# Acknowledgements

We sincerely thank all the anonymous reviewers for their insightful comments and suggestions to improve the paper. This work was supported by the National Key Research and Development Program of China (2020AAA0108004) and the National Natural Science Foundation of China (No.U1936109).

# References

Maruan Al-Shedivat and Ankur Parikh. 2019. Consistency by agreement in zero-shot neural machine translation.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1184-1197, Minneapolis, Minnesota. Association for Computational Linguistics.
Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Roee Aharoni, Melvin Johnson, and Wolfgang Macherey. 2019a. The missing ingredient in zero-shot neural machine translation. CoRR, abs/1903.07091.
Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019b. Massively multilingual neural machine translation in the wild: Findings and challenges. arXiv preprint arXiv:1907.05019.
Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1538-1548, Hong Kong, China. Association for Computational Linguistics.
Graeme Blackwood, Miguel Ballesteros, and Todd Ward. 2018. Multilingual neural machine translation with task-specific attention. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3112-3122, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1723-1732, Beijing, China. Association for Computational Linguistics.
Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016.
Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 866-875, San Diego, California. Association for Computational Linguistics. +Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2021. The FLORES-101 evaluation benchmark for low-resource and multilingual machine translation. CoRR, abs/2106.03193. +Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O.K. Li. 2019. Improved zero-shot neural machine translation via ignoring spurious correlations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1258-1268, Florence, Italy. Association for Computational Linguistics. + +Thanh-Le Ha, Jan Niehues, and Alex Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. In Proceedings of IWSLT 2016. +Haoming Jiang, Chen Liang, Chong Wang, and Tuo Zhao. 2020. Multi-domain neural machine translation with word-level adaptive layer-wise domain mixing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1823-1834, Online. Association for Computational Linguistics. +Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351. +Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Herve Jégou, and Tomas Mikolov. 2016. Fasttext.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651. +Diederik P Kingma and Jimmy Ba. 2014. 
Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.

Surafel Melaku Lakew, Mauro Cettolo, and Marcello Federico. 2018. A comparison of transformer and recurrent neural networks on multilingual neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 641-652, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2020. GShard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668.

Zehui Lin, Liwei Wu, Mingxuan Wang, and Lei Li. 2021. Learning language specific sub-network for multilingual machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 293-305, Online. Association for Computational Linguistics.

Danni Liu, Jan Niehues, James Cross, Francisco Guzmán, and Xian Li. 2021. Improving zero-shot translation by disentangling positional information. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1259-1273, Online. Association for Computational Linguistics.

Arturo Oncevay, Barry Haddow, and Alexandra Birch. 2020. Bridging linguistic typology and multilingual machine translation with multi-view language representations.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2391-2406, Online. Association for Computational Linguistics.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.

Xiao Pan, Mingxuan Wang, Liwei Wu, and Lei Li. 2021. Contrastive learning for many-to-many multilingual neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 244-258, Online. Association for Computational Linguistics.

Ngoc-Quan Pham, Jan Niehues, Thanh-Le Ha, and Alexander Waibel. 2019. Improving zero-shot translation with language-independent constraints. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 13-23, Florence, Italy. Association for Computational Linguistics.

Jerin Philip, Alexandre Bérard, Matthias Gallé, and Laurent Besacier. 2020. Monolingual adapters for zero-shot neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4465-4470, Online. Association for Computational Linguistics.

Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.

Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation?
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 529-535, New Orleans, Louisiana. Association for Computational Linguistics.

Devendra Sachan and Graham Neubig. 2018. Parameter sharing methods for multilingual self-attentional translation models. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 261-271, Brussels, Belgium. Association for Computational Linguistics.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538.

Xu Tan, Jiale Chen, Di He, Yingce Xia, Tao Qin, and Tie-Yan Liu. 2019. Multilingual neural machine translation with language clustering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 963-973, Hong Kong, China. Association for Computational Linguistics.

Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11).

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 6000-6010.

Raúl Vázquez, Alessandro Raganato, Jörg Tiedemann, and Mathias Creutz. 2019.
Multilingual NMT with a language-independent attention bridge. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 33-39, Florence, Italy. Association for Computational Linguistics.

Qian Wang and Jiajun Zhang. 2021. Parameter differentiation based multilingual neural machine translation. arXiv preprint arXiv:2112.13619.

Yining Wang, Long Zhou, Jiajun Zhang, Feifei Zhai, Jingfang Xu, and Chengqing Zong. 2019. A compact and language-sensitive multilingual translation method. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1213-1223, Florence, Italy. Association for Computational Linguistics.

Wanying Xie, Yang Feng, Shuhao Gu, and Dong Yu. 2021. Importance-based neuron allocation for multilingual neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5725-5737, Online. Association for Computational Linguistics.

Yilin Yang, Akiko Eriguchi, Alexandre Muzio, Prasad Tadepalli, Stefan Lee, and Hany Hassan. 2021. Improving multilingual translation by representation and gradient regularization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7266-7279, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Biao Zhang, Ankur Bapna, Rico Sennrich, and Orhan Firat. 2021. Share or not? Learning to schedule language-specific capacity for multilingual translation. In International Conference on Learning Representations 2021.

Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628-1639, Online. Association for Computational Linguistics.
Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 30-34, San Diego, California. Association for Computational Linguistics.

# A Dataset Details

# A.1 Training Data

We perform en-xx and xx-en translations on the OPUS-100 and WMT-14 benchmarks, and zero-shot translations are evaluated on the IWSLT-17, Europarl and WMT-5 datasets. We give detailed descriptions of the datasets used in this work.

OPUS-100. We collect 94 language pairs from the release of Zhang et al. (2020) by discarding those without valid/test sets. We use the official valid/test sets for evaluation.

WMT-14. We use the same training-valid/test sets as Zhang et al. (2021) except that we limit the training sentence pairs in each direction to 10M by random sampling.

IWSLT-17. We select 3 language pairs (En ↔ {It, Nl, Ro}) from the official dataset, and perform 6 zero-shot translations between the 3 non-English languages. The datasets are described in Table 7.

Europarl. We use the training-valid/test datasets released by Liu et al. (2021) and conduct experiments under three conditions following Liu et al. (2021).

WMT-5. We collect 4 language pairs from WMT-14: En-De (4.5M), En-Hi (0.3M), En-Ru (10M) and En-Zh (10M). We study this challenging case where the training data is imbalanced and the languages involved in zero-shot directions differ in scripts. We evaluate the zero-shot performance on the Flores devtest, which contains 1012 sentences in each direction.

# A.2 Evaluation Data

We employ the TED testset and Flores devtest for representation analysis in Section 5.2, and we give more detailed descriptions below.

TED.
We construct a multi-way parallel testset of 2296 samples covering 15 languages including Arabic, Czech, German, English, Spanish, French, Italian, Japanese, Korean, Dutch, Romanian, Russian, Turkish, Vietnamese and Chinese. Note that the languages in TED are all high-resource in the OPUS-100 dataset. + +
| Language Pair | train | valid | test |
| --- | --- | --- | --- |
| En-It | 231.6K | 929 | 1566 |
| En-Nl | 237.2K | 1003 | 1777 |
| En-Ro | 220.5K | 914 | 1678 |
| It-Ro | 217.5K | 914 | 1643 |
| Nl-Ro | 206.9K | 913 | 1680 |
| It-Nl | 233.4K | 1001 | 1669 |
+ +Table 7: Statistics of IWSLT-17 dataset. + +
| | Languages |
| --- | --- |
| Low | am, be, ha, ig, kk, kn, ky, mr, my, oc, or, ps, te, zu |
| Med | af, as, az, cy, ga, gl, gu, hi, ka, km, ku, ml, ne, pa, ta, tg, ur, uz, xh |
| High | ar, bg, bn, bs, ca, cs, da, de, el, es, et, fa, fi, fr, he, hr, hu, id, is, it, ja, ko, lt, mk, ms, mt, nl, no, pl, pt, ro, ru, sk, sl, sr, sv, th, tr, uk, vi, zh |
Table 8: Languages in Flores devtest set used for similarity search.

Flores. For Flores, we select the first 100 sentences from the devtest set for each language, resulting in a multi-way testset of 75 languages. We split the languages into three groups (Low/Med/High) according to their data size in the OPUS-100 dataset. The detailed statistics are listed in Table 8.

# B Implementation Details

For fair comparison, we employ Transformer base in all our experiments, which consists of 6 stacked encoder/decoder layers and 8 attention heads, with a model size $d_{\mathrm{model}}$ of 512 and feed-forward dimension $d_{\mathrm{ffn}}$ of 2048.

For model training, we use the temperature-based sampling strategy to balance the training data distribution with a temperature of $T = 5$ (Arivazhagan et al., 2019b), and set share-all-embeddings in Fairseq to save parameters. All the model parameters are optimized using the Adam optimizer (Kingma and Ba, 2014) ( $\beta_{1} = 0.9$ , $\beta_{2} = 0.98$ ) with label smoothing of 0.1. The learning rate is scheduled as in Vaswani et al. (2017) with a warm-up step of 4000 and a peak learning rate of 0.0005. The dropout rate is set to 0.1 and the smoothing parameter $\alpha$ in Equation 1 is set to 0.05. We train all models with a batch size of 4096 and set update_freq in Fairseq to 4. The training sequence length is limited to 100 and all the MNMT models are trained for 120K steps on 4 Nvidia RTX A6000 GPUs. We add a target language token $l$ to the source sentence to indicate the language to translate into following
| Model | Model Size | en-xx All | en-xx WR | xx-en All | xx-en WR | Decoding Speed (tokens/s) |
| --- | --- | --- | --- | --- | --- | --- |
| Multilingual | 76.96M | 24.38 | - | 31.73 | - | 1873 |
| +Adapter | 224.81M | +2.85 | 93.62 | +1.78 | 88.30 | 1726 |
| +CLSR | 136.08M | +2.12 | 94.68 | +1.25 | 88.30 | 1380 |
| sCLM-top | 179.87M | +2.91 | 96.81 | +1.62 | 94.68 | 1564 |
| mCLM-top | 224.61M | +3.13 | 95.74 | +1.83 | 91.49 | 1590 |
| sCLM | 179.89M | +3.19 | 95.74 | +1.91 | 97.87 | 1143 |
| mCLM | 224.63M | +3.01 | 94.68 | +2.14 | 92.55 | 1240 |
Table 9: Comparisons of translation quality and decoding speed on the OPUS-100 training data. The bottleneck dimension in Adapter is set to 128. The feature number $k$ is set to 194 in sCLM models for both en-xx and xx-en translation, while $k$ is set to 134/154 in mCLM models for en-xx and xx-en translation, respectively.

Johnson et al. (2017). However, the language token $l$ is altered to denote the source language in our experiments when performing xx-en translation, following Zhang et al. (2021).

We average the last 5 checkpoints for evaluation. We perform beam search decoding with a beam size of 4 and length penalty of 1.0.

# C More Results

# C.1 Comparisons on Performance and Speed

We compare the translation performance and decoding speed of our methods with all the baselines. For fair comparisons, we build another CLM variant (CLM-top) in which the CLM modules are only introduced in each feed-forward sublayer, similar to Adapter. The results are listed in Table 9. We highlight two major findings:

- Compared with the original CLM models, the CLM-top models suffer from a slight degradation in most cases, showing that it is better to introduce CLM modules in all the sublayers. Despite that, the CLM-top models achieve similar or better performance compared with Adapter and CLSR. These results further show the effectiveness of our method.
- The decoding speed is related to both the number of CLM modules in the Transformer and the number of features in each CLM module. Compared with Adapter, all the CLM models slow down decoding due to the token-level feature mixing.

# C.2 Detailed Results on Feature Proportion Similarity

We show the top-5 similar languages for each language based on their feature proportion similarity.

The results in the encoder and the decoder are listed in Tables 10 and 11, respectively.
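The ranking itself is a plain cosine-similarity computation over the per-language feature-proportion vectors. The sketch below illustrates the idea with toy three-dimensional vectors; the helper names and values are illustrative and not taken from our released code:

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature-proportion vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def top_k_similar(anchor, proportions, k=5):
    # Rank every other language by cosine similarity to the anchor language.
    scores = [(lang, cosine(proportions[anchor], vec))
              for lang, vec in proportions.items() if lang != anchor]
    return [lang for lang, _ in sorted(scores, key=lambda s: -s[1])[:k]]

# Toy proportions over three CLM features (illustrative values only).
proportions = {
    "af": [0.5, 0.3, 0.2],
    "nl": [0.5, 0.3, 0.2],
    "de": [0.4, 0.4, 0.2],
    "ja": [0.1, 0.1, 0.8],
}
print(top_k_similar("af", proportions, k=2))  # ['nl', 'de']
```

With the trained models, each language's vector would instead have one entry per CLM feature (e.g., $k = 194$ for sCLM), and the top-5 neighbors are reported in Tables 10 and 11.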
+ +# C.3 Visualization of Sentence Representations + +The visualizations on med- and high-resource languages are depicted in Figures 5 and 6, respectively. + +
| Code | Language | Genus | Family | Similar Languages | Code | Language | Genus | Family | Similar Languages |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| af | Afrikaans | Germanic | Indo-European | fy nl de nn nb | sq | Albanian | Albanian | Indo-European | it es pl ro pt |
| da | Danish | Germanic | Indo-European | sv nb no nl nn | br | Breton | Celtic | Indo-European | as cy bn pl it |
| de | German | Germanic | Indo-European | nl ru da fr nb | cy | Welsh | Celtic | Indo-European | fy km nn kk as |
| fy | Western Frisian | Germanic | Indo-European | af nn pa ne li | ga | Irish | Celtic | Indo-European | fr ru gd sh mt |
| is | Icelandic | Germanic | Indo-European | no sv da nl bs | gd | Gaelic | Celtic | Indo-European | ga km af or nn |
| li | Limburgan | Germanic | Indo-European | fy tk yi ku ky | el | Greek | Greek | Indo-European | si cs pl mk sk |
| nl | Dutch | Germanic | Indo-European | de sv da no ru | ja | Japanese | Japanese | Japanese | ko ml bn si th |
| no | Norwegian | Germanic | Indo-European | sv da is nb nl | ko | Korean | Korean | Korean | ja ml th si bn |
| nb | Norwegian Bokmål | Germanic | Indo-European | da nn sv no de | rw | Kinyarwanda | Bantoid | Niger-Congo | be fy oc ne km |
| nn | Norwegian Nynorsk | Germanic | Indo-European | nb da sv fy no | xh | Xhosa | Bantoid | Niger-Congo | zu et ru es ku |
| sv | Swedish | Germanic | Indo-European | da no nb is nl | zu | Zulu | Bantoid | Niger-Congo | xh fy kk wa ne |
| yi | Yiddish | Germanic | Indo-European | li fy as ne ky | ig | Igbo | Igboid | Niger-Congo | cy fy li km ky |
| as | Assamese | Indic | Indo-European | ne or gu pa bn | az | Azerbaijani | Turkic | Altaic | ug tt ur uz am |
| bn | Bengali | Indic | Indo-European | ml ko hi ja as | kk | Kazakh | Turkic | Altaic | ky be or ne fy |
| gu | Gujarati | Indic | Indo-European | ne pa or as km | ky | Kyrgyz | Turkic | Altaic | be kk nn fy ne |
| hi | Hindi | Indic | Indo-European | ur ta ug tg bn | tk | Turkmen | Turkic | Altaic | li ku fy ky ps |
| mr | Marathi | Indic | Indo-European | or bn hi ml uk | tr | Turkish | Turkic | Altaic | ko ja ml bs pl |
| ne | Nepali | Indic | Indo-European | gu pa as or fy | tt | Tatar | Turkic | Altaic | az ug uz ur tg |
| or | Oriya | Indic | Indo-European | pa gu as ne kn | ug | Uyghur | Turkic | Altaic | az ur tt hi uz |
| pa | Panjabi | Indic | Indo-European | ne gu as fy | uz | Uzbek | Turkic | Altaic | tt ug az ur tg |
| si | Sinhala | Indic | Indo-European | ml el ko ja bn | am | Amharic | Semitic | Afro-Asiatic | az tg ur ug hi |
| ur | Urdu | Indic | Indo-European | hi tg ug az ta | ar | Arabic | Semitic | Afro-Asiatic | af ru es it pt |
| fa | Persian | Iranian | Indo-European | ko vi uk ml hi | he | Hebrew | Semitic | Afro-Asiatic | hr pl bs uk sr |
| ku | Kurdish | Iranian | Indo-European | ta hi uz ur tg | mt | Maltese | Semitic | Afro-Asiatic | fr it sh de es |
| ps | Pashto | Iranian | Indo-European | gu or ne pa as | ha | Hausa | West Chadic | Afro-Asiatic | ur tg az ug hi |
| tg | Tajik | Iranian | Indo-European | ur hi ug az am | et | Estonian | Finnic | Uralic | fi ru de cs uk |
| ca | Catalan | Romance | Indo-European | es gl it pt sr | fi | Finnish | Finnic | Uralic | et hu pl cs uk |
| es | Spanish | Romance | Indo-European | pt gl it ca fr | hu | Hungarian | Ugric | Uralic | fi cs et pl sk |
| fr | French | Romance | Indo-European | it es pt ru de | km | Central Khmer | Khmer | Austro-Asiatic | gu be nn fy oc |
| gl | Galician | Romance | Indo-European | pt es ca it ro | vi | Vietnamese | Viet-Muong | Austro-Asiatic | ms id th ko uk |
| it | Italian | Romance | Indo-European | es pt fr gl ca | mg | Malagasy | Barito | Austronesian | ms id fr ru es |
| oc | Occitan | Romance | Indo-European | be km fy se pt | id | Indonesian | Malayo-Sumbawan | Austronesian | ms vi th mg uk |
| pt | Portuguese | Romance | Indo-European | es gl it ca fr | ms | Malay | Malayo-Sumbawan | Austronesian | id vi th mg uk |
| ro | Romanian | Romance | Indo-European | it es ca gl pt | kn | Kannada | Southern Dravidian | Dravidian | or ne as pa kk |
| be | Belorusian | Slavic | Indo-European | ky ru kk km oc | ml | Malayalam | Southern Dravidian | Dravidian | si ko ja bn ta |
| bg | Bulgarian | Slavic | Indo-European | ka mk uk pl bs | ta | Tamil | Southern Dravidian | Dravidian | hi ml ur bn ku |
| bs | Bosnian | Slavic | Indo-European | hr sr sl pl mk | te | Telugu | Southern-central Dravidian | Dravidian | ta ml or ne as |
| cs | Czech | Slavic | Indo-European | sk sl pl hr bs | eu | Basque | Basque | Basque | it et es pt ru |
| hr | Croatian | Slavic | Indo-European | bs sr sl pl cs | my | Burmese | Burmese-Lolo | Sino-Tibetan | kn or ta kk as |
| mk | Macedonian | Slavic | Indo-European | bg ka bs sr hr | zh | Chinese | Chinese | Sino-Tibetan | lv ru lt fr bn |
| pl | Polish | Slavic | Indo-European | cs sk uk sl bs | th | Thai | Kam-Tai | Tai-Kadai | vi ko ms ja ml |
| ru | Russian | Slavic | Indo-European | uk mk sk de bg | lt | Lithuanian | Baltic | Indo-European | lv sh ru fr et |
| sh | Serbo-Croatian | Slavic | Indo-European | lv ru lt sk sl | lv | Latvian | Baltic | Indo-European | lt sh ru fr et |
| sk | Slovak | Slavic | Indo-European | cs sl pl hr bs | ka | Georgian | Kartvelian | Kartvelian | bg mk uk bs sr |
| sl | Slovenian | Slavic | Indo-European | sk cs hr bs sr | eo | Esperanto | - | - | it uk es cap |
| sr | Serbian | Slavic | Indo-European | bs hr sl mk pl | se | Northern Sami | - | - | fy km pa oc be |
| uk | Ukrainian | Slavic | Indo-European | pl ru mk bs bg | wa | Walloon | - | - | ne oc fy km pa |
Table 10: Top-5 languages similar to the anchor language according to the cosine similarity of feature proportions in the sCLM encoder on en-xx translation. The languages are categorized based on the typological knowledge base WALS (Dryer and Haspelmath, 2013).
| Code | Language | Genus | Family | Similar Languages | Code | Language | Genus | Family | Similar Languages |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| af | Afrikaans | Germanic | Indo-European | fy li nl nn nb | sq | Albanian | Albanian | Indo-European | ro et sl cs sk |
| da | Danish | Germanic | Indo-European | no sv nb nn is | br | Breton | Celtic | Indo-European | cy oc ku se wa |
| de | German | Germanic | Indo-European | nl da nb no sv | cy | Welsh | Celtic | Indo-European | br oc se ku af |
| fy | Western Frisian | Germanic | Indo-European | af li nn oc nb | ga | Irish | Celtic | Indo-European | gd de nb oc se |
| is | Icelandic | Germanic | Indo-European | sv no da nb et | gd | Gaelic | Celtic | Indo-European | ga oc cy se ig |
| li | Limburgan | Germanic | Indo-European | fy af wa nn oc | el | Greek | Greek | Indo-European | ro ka he th no |
| nl | Dutch | Germanic | Indo-European | af de da sv | ja | Japanese | Japanese | Japanese | zh ko ta th si |
| no | Norwegian | Germanic | Indo-European | da sv nb nn is | ko | Korean | Korean | Korean | th ja si zh ta |
| nb | Norwegian Bokmål | Germanic | Indo-European | nn da no sv af | rw | Kinyarwanda | Bantoid | Niger-Congo | tk li fy af zu |
| nn | Norwegian Nynorsk | Germanic | Indo-European | nb no da af sv | xh | Xhosa | Bantoid | Niger-Congo | zh sh tk et mt |
| sv | Swedish | Germanic | Indo-European | da no nb is nn | zu | Zulu | Bantoid | Niger-Congo | xh tk ig ku oc |
| yi | Yiddish | Germanic | Indo-European | gu li af ky kn | ig | Igbo | Igboid | Niger-Congo | zu tk gd rw li |
| as | Assamese | Indic | Indo-European | bn gu hi he nn | az | Azerbaijani | Turkic | Altaic | tr uz tk gu et |
| bn | Bengali | Indic | Indo-European | as he gu si hi | kk | Kazakh | Turkic | Altaic | ky be ru uk fy |
| gu | Gujarati | Indic | Indo-European | ne as hi bn pa | ky | Kyrgyz | Turkic | Altaic | kk be ru uk uz |
| hi | Hindi | Indic | Indo-European | ne mr gu cs si | tk | Turkmen | Turkic | Altaic | ku oc tr zu cy |
| mr | Marathi | Indic | Indo-European | hi ne cs gu sk | tr | Turkish | Turkic | Altaic | az tk eo et uz |
| ne | Nepali | Indic | Indo-European | hi gu mr nn pa | tt | Tatar | Turkic | Altaic | uz kk ky tg he |
| or | Oriya | Indic | Indo-European | hi gu kn ko pa | ug | Uyghur | Turkic | Altaic | ps ur hi uz ky |
| pa | Panjabi | Indic | Indo-European | km gu ne ko ja | uz | Uzbek | Turkic | Altaic | tg ky az tt tk |
| si | Sinhala | Indic | Indo-European | ml ko he th hi | am | Amharic | Semitic | Afro-Asiatic | ky gu uz az or |
| ur | Urdu | Indic | Indo-European | fa he hi th ar | ar | Arabic | Semitic | Afro-Asiatic | fa he ur de th |
| fa | Persian | Iranian | Indo-European | ar ur th he de | he | Hebrew | Semitic | Afro-Asiatic | bn si ka ro bg |
| ku | Kurdish | Iranian | Indo-European | cy br oc se tk | mt | Maltese | Semitic | Afro-Asiatic | it fr sh wa lv |
| ps | Pashto | Iranian | Indo-European | nn gu ug zu oc | ha | Hausa | West Chadic | Afro-Asiatic | tg ig ku ms tk |
| tg | Tajik | Iranian | Indo-European | uz be ky ru uk | et | Estonian | Finnic | Uralic | fi ms id ro sq |
| ca | Catalan | Romance | Indo-European | es gl pt it fr | fi | Finnish | Finnic | Uralic | et eu no id hu |
| es | Spanish | Romance | Indo-European | gl pt ca it fr | hu | Hungarian | Ugric | Uralic | et fi eo cs es |
| fr | French | Romance | Indo-European | ca pt es it gl | km | Central Khmer | Khmer | Austro-Asiatic | pa se gu oc nb |
| gl | Galician | Romance | Indo-European | pt es ca it fr | vi | Vietnamese | Viet-Muong | Austro-Asiatic | ms id et ka no |
| it | Italian | Romance | Indo-European | pt ca es gl fr | mg | Malagasy | Barito | Austronesian | sh fr di ve |
| oc | Occitan | Romance | Indo-European | wa se cy gl ca | id | Indonesian | Malayo-Sumbawan | Austronesian | ms vi et he fi |
| pt | Portuguese | Romance | Indo-European | gl es ca it fr | ms | Malay | Malayo-Sumbawan | Austronesian | id vi et ka fi |
| ro | Romanian | Romance | Indo-European | ca pt gl es it | kn | Kannada | Southern Dravidian | Dravidian | te or km nn ne |
| be | Belorusian | Slavic | Indo-European | uk ru ky kk tg | ml | Malayalam | Southern Dravidian | Dravidian | si ko vi ta hi |
| bg | Bulgarian | Slavic | Indo-European | mk ru uk ka he | ta | Tamil | Southern Dravidian | Dravidian | ko ga ja ml hi |
| bs | Bosnian | Slavic | Indo-European | hr sr sl sk sh | te | Telugu | Southern-central Dravidian | Dravidian | kn vi hi ml ko |
| cs | Czech | Slavic | Indo-European | sk sl pl hr bs | eu | Basque | Basque | Basque | fi eo id ms gl |
| hr | Croatian | Slavic | Indo-European | bs sr sl sk sh | my | Burmese | Burmese-Lolo | Sino-Tibetan | gu or eo oc tk |
| mk | Macedonian | Slavic | Indo-European | bg ru uk ka he | zh | Chinese | Chinese | Sino-Tibetan | ja th ko bn ta |
| pl | Polish | Slavic | Indo-European | sk cs hr sl bs | th | Thai | Kam-Tai | Tai-Kadai | ko zh si ru uk |
| ru | Russian | Slavic | Indo-European | uk bg be mk ky | lt | Lithuanian | Baltic | Indo-European | lv sh eo ru cs |
| sh | Serbo-Croatian | Slavic | Indo-European | hr sr bs sl sk | lv | Latvian | Baltic | Indo-European | lt sh et nb ru |
| sk | Slovak | Slavic | Indo-European | cs sl pl hr bs | ka | Georgian | Kartvelian | Kartvelian | bg mk ru he nl |
| sl | Slovenian | Slavic | Indo-European | hr bs sr sk cs | eo | Esperanto | - | - | ca es gl oc pt |
| sr | Serbian | Slavic | Indo-European | bs hr sl sk sh | se | Northern Sami | - | - | oc cy ku nn br |
| uk | Ukrainian | Slavic | Indo-European | ru bg be mk th | wa | Walloon | - | - | oc af ku nn li |
Table 11: Top-5 languages similar to the anchor language according to the cosine similarity of feature proportions in the sCLM decoder on en-xx translation.

![](images/c5049d1eb6d6360cf0bb14987be152b68f9fe8738995c831e979b392b12d5a3e.jpg)
(a) Multilingual baseline

![](images/b0543a3ce417c4a116a9785b9beca93c20ac5749a0dae24750da718ec0d9a2b9.jpg)
(b) $sCLM$

Figure 5: t-SNE visualizations of the encoder representations of 19 med-resource languages on xx-en translation encoded by Multilingual baseline, sCLM and mCLM.

![](images/539805563732533956c5f22ed9b89fa2a09e852a7d1e0764bb331a7279c62ae7.jpg)
(c) $mCLM$

![](images/69b350b8cd1ead394ff2110d08762f5c8bb631865973d67bc4d74baa88c19e87.jpg)
(a) Multilingual baseline

Figure 6: t-SNE visualizations of the encoder representations of 41 high-resource languages on xx-en translation encoded by Multilingual baseline, sCLM and mCLM.

![](images/08e66bfd207a30bb8a2c3660055569fe73a2f3cb03d13f0ea999dca932b14573.jpg)
(b) $sCLM$

![](images/db8cee1734288a11743c7408d03b74d45b8dab10697809b6dc0787fa0af6e37c.jpg)
(c) $mCLM$
# A Dataset for Hyper-Relational Extraction and a Cube-Filling Approach

Yew Ken Chia, Lidong Bing, Sharifah Mahani Aljunied, Luo Si, Soujanya Poria

DAMO Academy, Alibaba Group; Singapore University of Technology and Design

{yewken.chia, l.bing, mahani.aljunied, luo.si}@alibaba-inc.com, {yewken.chia, sporia}@sutd.edu.sg

# Abstract

Relation extraction has the potential for large-scale knowledge graph construction, but current methods do not consider the qualifier attributes for each relation triplet, such as time, quantity or location. The qualifiers form hyper-relational facts which better capture the rich and complex knowledge graph structure. For example, the relation triplet (Leonard Parker, Educated At, Harvard University) can be factually enriched by including the qualifier (End Time, 1967). Hence, we propose the task of hyper-relational extraction to extract more specific and complete facts from text. To support the task, we construct HyperRED, a large-scale and general-purpose dataset. Existing models cannot perform hyper-relational extraction as it requires a model to consider the interaction between three entities. Hence, we propose CubeRE, a cube-filling model that is inspired by table-filling approaches and explicitly considers the interaction between relation triplets and qualifiers.
To improve model scalability and reduce negative class imbalance, we further propose a cube-pruning method. Our experiments show that CubeRE outperforms strong baselines and reveal possible directions for future research. Our code and data are available at github.com/declare-lab/HyperRED.

# 1 Introduction

Knowledge acquisition is an open challenge in artificial intelligence research (Lenat, 1995). The standard form of representing the acquired knowledge is a knowledge graph (Hovy et al., 2013), which has broad applications such as question answering (Yih and Ma, 2016; Chia et al., 2020) and search engines (Xiong et al., 2017). Relation extraction (RE) is a task that has the potential for large-scale and automated knowledge graph construction by extracting facts from natural language text.

![](images/a18fae4b7ed3d391538300ac431c9c1a20062772b24fdd43bbe673f2fbc9f5f0.jpg)
Figure 1: A sample from our HyperRED dataset for the proposed task of hyper-relational extraction.

Most relation extraction methods focus on binary relations (Bach and Badaskar, 2007) which consider the relationship between two entities, forming a relation triplet consisting of the head entity, relation and tail entity respectively.

However, knowledge graphs commonly contain hyper-relational facts (Guan et al., 2019) which have qualifier attributes for each relational triplet, such as time, quantity, or location. For instance, Wen et al. (2016) found that the Freebase knowledge graph contains hyper-relational facts for $30\%$ of entities. Hence, extracting relation triplets may be an oversimplification of the rich and complex knowledge graph structure. As shown in Figure 1, a relation triplet can be attributed to one or more qualifiers, where a qualifier is composed of a qualifier label and value entity.
For example, the relation triplet (Leonard Parker, Educated At, Harvard University) can be factually enriched by specifying the qualifier of (End Time, 1967), forming the hyper-relational fact (Leonard Parker, Educated At, Harvard University, End Time, 1967).

Hyper-relational facts generally cannot be simplified into the relation triplet format as the qualifiers are attributed to the triplet as a whole and not targeted at a specific entity in the triplet. Furthermore, attempting to decompose the hyper-relational structure to an n-ary format would lose the original triplet information and be incompatible with the knowledge graph schema (Rosso et al., 2020). On the other hand, hyper-relational facts have practical benefits such as improved fact verification (Thorne et al., 2018) and representation learning for knowledge graphs (Galkin et al., 2020). Thus, it is necessary to extract relation triplets together with qualifiers to form hyper-relational facts.

In this work, we propose the task of hyper-relational extraction to jointly extract relation triplets with qualifiers from natural language sentences. To support the task, we contribute a general-purpose and large-scale hyper-relational extraction dataset (HyperRED) which is constructed through distant supervision (Mintz et al., 2009) and partially refined through human annotation. Our dataset differs from previous datasets in two distinct ways: (1) Compared to existing datasets for binary relation extraction (Zhang et al., 2017; Han et al., 2018), HyperRED enables richer information extraction as it contains qualifiers for each relation triplet in the sentence. (2) While datasets for n-ary relation extraction (Jia et al., 2019) are restricted to the biomedical domain, HyperRED covers multiple domains and has a hyper-relational fact structure that is compatible with the knowledge graph schema.

Unfortunately, to the best of our knowledge, there are no existing models for hyper-relational extraction.
Currently, a popular end-to-end method for binary relation extraction is to cast it as a table-filling problem (Miwa and Sasaki, 2014). Generally, a two-dimensional table is used to represent the interaction between any two individual words in a sentence. However, hyper-relational extraction requires the model to consider the interactions between the two entities in the relation triplet, as well as the value entity for the qualifier. Thus, we extend the table-filling approach to a third dimension, casting the task as a cube-filling problem. On the other hand, a naive cube-filling approach faces two issues: (1) Computing the full cube representation is computationally expensive and does not scale well to longer sequence lengths. (2) The full cube is sparsely labeled, with the vast majority of entries as negative samples, causing the model to be biased in learning (Li et al., 2020) and hence underperform.

To tackle these two issues, we propose a simple yet effective cube-pruning technique that filters the cube entries based on words that are more likely to constitute valid entities. Our experiments show that cube-pruning significantly improves the computational efficiency and simultaneously improves the extraction performance by reducing the negative class imbalance. In addition to our cube-filling model, which we refer to as CubeRE, we also introduce two strong baseline models: a two-stage pipeline and a generative sequence-to-sequence (Sutskever et al., 2014) model.

In summary, our main contributions are: (1) We propose the task of hyper-relational extraction to extract richer and more complete facts by jointly extracting each relation triplet with the corresponding qualifiers; (2) To support the task, we provide a large-scale and general-purpose dataset known as HyperRED; (3) As there is no existing model for hyper-relational extraction, we propose a cube-filling model known as CubeRE, which consistently outperforms baseline extraction methods.
# 2 HyperRED: A Hyper-Relational Extraction Dataset

Our goal is to construct a large-scale and general-purpose dataset for extracting hyper-relational facts from natural language text. However, it is seldom practical to assume an ample supply of high-quality labeled samples in real applications, especially for complex tasks such as information extraction. Hence, we adopt a weakly supervised (Craven and Kumlien, 1999) data setting, which enables us to collect a larger and more diverse training set than would otherwise be possible. To minimize the effect of noisy samples in evaluation, we then perform human annotation for a portion of the collected data and allocate it as the held-out set. In the following sections, we first introduce the process of collecting the distantly supervised data, followed by the human-annotated data portion.

# 2.1 Distantly Supervised Data Collection

To collect a large and diverse dataset of sentences with hyper-relational facts, we employ distant supervision, which falls under the weakly supervised setting. Distant supervision automatically collects a dataset of relational facts by aligning a text corpus with facts from an existing knowledge graph. Similar to Elsahar et al. (2018), we first extract and link entities from the corpus to an existing knowledge graph, and resolve any coreference cases to the previously linked entities. To align hyper-relational facts from the knowledge graph to the text corpus, we detect whether the entities that comprise each fact are also present in each sentence. Each sentence with
aligned facts is collected as part of the distantly supervised dataset. To ensure that the large-scale text corpus can be well aligned with the knowledge graph, we perform distant supervision between English Wikipedia and Wikidata (Erxleben et al., 2014), which is the central knowledge graph for Wikipedia. Following Elsahar et al. (2018), we use the introduction sections of Wikipedia articles as the text corpus, as they generally contain the most important information.

| Type | Proportion | Example Sentence | Hyper-Relational Facts |
| --- | --- | --- | --- |
| Time | 48% | Tennyson was an ASCAP member from 1950. | (Tennyson, member of, ASCAP, start time, 1950) |
| Quantity | 19% | Szewczyk played 37 times for Poland, scoring 3 goals. | (Szewczyk, member of sports team, Poland, number of matches played, 37)<br>(Szewczyk, member of sports team, Poland, number of points, 3) |
| Role | 12% | John Sculley is a former Apple CEO. | (John Sculley, employer, Apple, position held, CEO) |
| Part-Whole | 11% | The Ohio Senate is the upper house of the Ohio General Assembly, the Ohio state legislature. | (Ohio, legislative body, Ohio General Assembly, has part, Ohio Senate) |
| Location | 9% | Donner was elected at the 1931 election as Conservative MP for Islington West. | (Donner, candidacy in election, 1931 election, electoral district, Islington West) |

Table 1: General typology and distribution of frequent qualifier labels for the HyperRED dataset, shown with example sentences and the corresponding hyper-relational facts.

Entity Extraction and Linking The distant supervision process relies on matching entities in a sentence with facts from the knowledge graph. To detect and identify the named entities in the articles, we use the DBpedia Spotlight (Mendes et al., 2011) entity linker. For the extraction of temporal and numerical entities, we use the spaCy tool.

Coreference Resolution As Wikipedia articles often use pronouns to refer to entities across sentences, it is necessary to resolve such references. We employ the Stanford CoreNLP tool (Manning et al., 2014) for this task.

Hyper-Relational Alignment To extend the distant supervision paradigm to hyper-relational facts, we jointly match based on the entities that comprise each hyper-relational fact. Formally, let $f = (e_{head}, r, e_{tail}, q, e_{value})$ be a possible hyper-relational fact consisting of the head entity, relation, tail entity, qualifier label and value entity, respectively. Given a corpus of text articles, each article contains a set of sentences $\{s_1, \dots, s_n\}$, where each sentence $s_i$ has $E_i$ entities that are linked to the knowledge graph. For each hyper-relational fact $f$ in the knowledge graph, it is aligned to the sentence $s_i$ if the head entity $e_{head}$, tail entity $e_{tail}$ and value entity $e_{value}$ are all linked in the sentence.
Hence, we obtain a set of aligned facts for each sentence: $\{(s_i, f) | e_{head} \in E_i, e_{tail} \in E_i, e_{value} \in E_i\}$ . + +Following Riedel et al. (2010), we remove any sentence that does not contain aligned facts. + +# 2.2 Human-Annotated Data Collection + +Although distant supervision can align a large amount of hyper-relational facts, the process can introduce noise in the dataset due to possible spurious alignments and incompleteness of the knowledge graph (Nickel et al., 2016). However, it is not feasible to completely eliminate such noise from the dataset due to the annotation time and budget constraints. Hence, we select a portion of the distantly supervised data to be manually labeled by human annotators. To provide a solid evaluation setting for future research works, the human-annotated data will be used as the development and testing set. We include the development set in the annotated portion as it is necessary for hyperparameter tuning and model selection. + +The goal of the human annotation stage is to identify correct alignments and remove invalid alignments. During the process, the annotators are tasked to review the correctness of each aligned fact, where an aligned fact consists of the sentence $s_i$ and hyper-relational fact $f$ . The alignment may be invalid if the relation triplet of the fact is not semantically expressed in the sentence, based on the Wikidata relation meaning. For instance, given the sentence "Prince Koreyasu was the son of Prince Munetaka who was the sixth shogun.", the relation triplet (Prince Koreyasu, Occupation, shogun) is considered invalid as the sentence did not explicitly state if "Prince Koreyasu" became a shogun. Similarly, the alignment may be invalid if the qualifier of the fact is not semantically expressed in the sentence, based on the Wikidata definition of the qualifier label. 
For example, given the sentence "Robin Johns left Northamptonshire at the end of the 1971 season.", the hyper-relational fact (Robin Johns, member of sports team, Northamptonshire, Start Time, 1971) has an invalid qualifier as the + +
label should be changed to "End Time". Hence, the annotation is posed as a multi-class classification over each alignment with three classes: "correct", "invalid triplet" or "invalid qualifier". Appendix A has the annotation guide and data samples.

| Dataset | #Train | #Dev | #Test | #Facts | \|R\| | \|Q\| |
| --- | --- | --- | --- | --- | --- | --- |
| TACRED | 37,311 | 10,233 | 6,277 | 68,586 | 41 | 0 |
| NYT24 | 56,196 | 5,000 | 5,000 | 17,624 | 24 | 0 |
| NYT29 | 63,306 | 7,033 | 4,006 | 18,479 | 29 | 0 |
| HyperRED | 39,840 | 1,000 | 4,000 | 44,372 | 62 | 44 |

Table 2: Comparison of existing sentence-level datasets with HyperRED. "#Facts" denotes the number of unique facts; $|R|$ and $|Q|$ denote the number of unique relation labels and qualifier labels, respectively. To our knowledge, HyperRED is the first RE dataset to include hyper-relational facts.

Each alignment sample is annotated by two professional annotators working independently. There are 6,780 sentences annotated in total, and the inter-annotator agreement, measured using Cohen's kappa, is 0.56. This kappa value is comparable with previous relation extraction datasets (Zhang et al., 2017), demonstrating that the annotations are of reasonably high quality. For each sample with disagreement, a third annotator is brought in to judge the final result. We observe that $76\%$ of samples are annotated as "correct", which indicates a reasonable level of accuracy in the distantly supervised data. To reduce the long-tailed class imbalance (Zhang et al., 2019), we apply a filter to ensure that all relation and qualifier labels have at least ten occurrences in the dataset. Although it can be more realistic to include challenging samples, such as long-tailed class samples or negative samples, we aim to address such challenges in a future version of the dataset.

# 2.3 Data Analysis

To provide a better understanding of the HyperRED dataset, we analyze several of its aspects.

Qualifier Typology The qualifiers of the hyper-relational facts can be grouped into several broad categories, as shown in Table 1. Notably, the majority of the qualifiers fall under the "Time" category, as it can be considered a fundamental attribute of many facts.
The remaining qualifiers are distributed among the "Quantity", "Role", "Part-Whole" and "Location" categories. Hence, the HyperRED dataset is able to support a diverse typology of hyper-relational facts.

Size and Coverage The statistics of HyperRED are shown in Table 2. We find that in terms of size and number of relation types, HyperRED is comparable to existing sentence-level datasets, such as TACRED (Zhang et al., 2017), NYT24 and NYT29 (Nayak and Ng, 2020). Table 1 also demonstrates that HyperRED can serve as a general-purpose dataset, covering several domains such as business, sports and politics. Appendix C has more details.

# 3 CubeRE: A Cube-Filling Approach

# 3.1 Task Formulation

Hyper-Relational Extraction Given an input sentence of $n$ words $s = \{x_1, x_2, \ldots, x_n\}$, an entity $e$ is a consecutive span of words, $e = \{x_i, x_{i+1}, \ldots, x_j\}$ with $i, j \in \{1, \ldots, n\}$. For each sentence $s$, the output of a hyper-relational extraction model is a set of facts, where each fact consists of a relation triplet with an attributed qualifier. A relation triplet consists of the relation $r \in R$ between head entity $e_{head}$ and tail entity $e_{tail}$, where $R$ is the predefined set of relation labels. The qualifier is an attribute of the relation triplet and is composed of the qualifier label $q \in Q$ and the value entity $e_{value}$, where $Q$ is the predefined set of qualifier labels. Hence, a hyper-relational fact has five components: $(e_{head}, r, e_{tail}, q, e_{value})$.

Cube-Filling Inspired by table-filling approaches, which can naturally perform binary relation extraction in an end-to-end fashion, we cast hyper-relational extraction as a cube-filling problem, as shown in Figure 2. The cube contains multiple planes: the front-most plane is a two-dimensional table containing the entity and relation label information, while the following planes contain the corresponding qualifier information.
Each entry on the table diagonal represents a possible entity, while each entry outside the table diagonal represents a possible relation triplet. For example, the entry "Educated At" represents a relation between the head entity "Parker" and the tail entity "Harvard". Each table entry $y_{ij}^{t}$ can contain the null label $\bot$, an entity label, or a relation label, i.e., $y_{ij}^{t} \in Y^{t} = \{\bot, \text{Entity}\} \cup R$.

The following planes in the cube represent the qualifier dimension, where each entry represents a possible qualifier label and value entity word for the corresponding relation triplet. For instance, the entry "Academic Degree" in the qualifier plane for "PhD" corresponds to the relation triplet (Parker, Educated At, Harvard), hence forming the hyper-relational fact (Parker, Educated At, Harvard, Academic Degree, PhD). Each qualifier entry $y_{ijk}^{q}$ can contain the null label $\bot$ or a qualifier label, i.e., $y_{ijk}^{q} \in Y^{q} = \{\bot\} \cup Q$. Note that the cube-filling formulation also supports hyper-relational facts that share the same relation triplet, as the different qualifiers can occupy separate planes in the qualifier dimension and still correspond to the same relation triplet entry.

# 3.2 Model Architecture

Our model, known as CubeRE, first encodes each input sentence using a language model encoder to obtain the contextualized sequence representation. We then capture the interaction between each possible head and tail entity as a pair representation for predicting the entity-relation label scores. To reduce the computational cost, each sentence is pruned to retain only words that have higher entity scores. Finally, we capture the interaction between each possible relation triplet and qualifier to predict the qualifier label scores and decode the outputs.
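To make the label scheme concrete, the cube for the "Parker — Educated At — Harvard" running example above can be sketched as follows. This is a minimal NumPy illustration with toy label inventories; the word positions and integer label ids are ours, not the paper's.

```python
import numpy as np

# Toy sentence and label inventories (illustrative, not the dataset's full label sets).
words = ["Parker", "received", "his", "PhD", "from", "Harvard"]
n = len(words)
NULL, ENTITY = 0, 1            # shared null label ⊥ and the generic Entity label
REL = {"educated_at": 2}       # relation labels R occupy the remaining table ids
QUAL = {"academic_degree": 1}  # qualifier labels Q; 0 stands for ⊥ in qualifier planes

# Front-most plane: an n x n table y^t with entities on the diagonal and
# relation labels off the diagonal, indexed (head word, tail word).
y_t = np.full((n, n), NULL, dtype=int)
y_t[0, 0] = ENTITY             # "Parker"
y_t[3, 3] = ENTITY             # "PhD"
y_t[5, 5] = ENTITY             # "Harvard"
y_t[0, 5] = REL["educated_at"] # (Parker, educated at, Harvard)

# Qualifier dimension: an n x n x n cube y^q where entry (i, j, k) marks the
# qualifier label linking triplet (i, r, j) to value-entity word k.
y_q = np.zeros((n, n, n), dtype=int)
y_q[0, 5, 3] = QUAL["academic_degree"]  # (..., academic degree, PhD)

# Reading the hyper-relational fact back out of the cube:
i, j, k = 0, 5, 3
assert y_t[i, j] == REL["educated_at"] and y_q[i, j, k] == QUAL["academic_degree"]
```

Note that, as the text describes, a second qualifier for the same triplet would simply occupy a different plane index `k` while pointing back to the same table entry `(i, j)`.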
# 3.2.1 Sentence Encoding

To encode a contextualized representation for each word in a sentence $s$, we use the pre-trained BERT (Devlin et al., 2019) language model:

$$
\{h_1, h_2, \dots, h_n\} = \mathrm{BERT}(\{x_1, x_2, \dots, x_n\}) \tag{1}
$$

where $h_i$ denotes the contextualized representation of the $i$-th word in the sentence.

# 3.2.2 Entity-Relation Representation

To capture the interaction between head and tail entities, we concatenate each possible pair of word representations and project with a dimension-reducing feed-forward network (FFN):

$$
g_{ij} = \mathrm{FFN}_{\text{pair}}(h_i \oplus h_j) \tag{2}
$$

Thus, we construct the table of categorical probabilities over entity and relation labels by applying an FFN and softmax over the pair representation:

$$
P(\hat{y}_{ij}^{t}) = \mathrm{Softmax}(\mathrm{FFN}_{t}(g_{ij})) \tag{3}
$$

where $\hat{y}_{ij}^{t}$ denotes the predicted table entry corresponding to the relation between the $i$-th possible head entity word and the $j$-th possible tail entity word. Note that we use the concatenation operation in Equation 2 instead of the averaging operation or other representation methods (Baldini Soares et al., 2019), as concatenation is simple and has been shown to be effective in recent RE works (Wang et al., 2021a; Wang and Lu, 2020).

![](images/d6ac3fc464f2e9a3713685e5e90cd9d026045856023d1bba4caef8da7fb07461.jpg)
Figure 2: An example of cube-filling for hyper-relational extraction. The front-most plane is a two-dimensional table that contains entity and relation information. It extends to the third dimension, where each plane represents a possible qualifier label and value entity word corresponding to the relation triplet entry.
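The tensor shapes in Equations 1–3 can be sketched with random matrices standing in for the BERT encoder and the trained FFN weights. This is a shape-level illustration only; the dimensions and weight matrices below are placeholders, not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, d_pair, n_table_labels = 6, 8, 4, 3  # toy sizes; |Y^t| = |{⊥, Entity} ∪ R|

# Stand-in for the BERT output of Equation 1: one d-dim vector per word.
h = rng.normal(size=(n, d))

# Equation 2: concatenate every (head, tail) word pair, then project with FFN_pair
# (here a single ReLU layer). Row i*n+j of `pairs` holds h_i ⊕ h_j.
W_pair = rng.normal(size=(2 * d, d_pair))
pairs = np.concatenate([np.repeat(h, n, axis=0), np.tile(h, (n, 1))], axis=1)
g = np.maximum(pairs @ W_pair, 0.0).reshape(n, n, d_pair)

# Equation 3: a label head plus softmax yields P(y_ij^t) over entity/relation labels.
W_t = rng.normal(size=(d_pair, n_table_labels))
logits = g @ W_t
logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
p_table = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

assert p_table.shape == (n, n, n_table_labels)
assert np.allclose(p_table.sum(axis=-1), 1.0)
```

The `(n, n, |Y^t|)` output corresponds to the front-most plane of Figure 2: diagonal entries score entity words, off-diagonal entries score relation triplets.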
# 3.2.3 Cube-Pruning

To predict the qualifier of a hyper-relational fact, the model needs to consider the interaction between each possible relation triplet and value entity, where the relation triplet contains a head entity and a tail entity. For a sentence with $n$ words, there are $n^3$ interactions, which do not scale well to longer input sequences. Hence, we propose a cube-pruning method that considers only interactions between the top $m$ words in terms of entity score. Consequently, the model will only consider the interactions between the top-$m$ most probable words for the potential head entities, tail entities, and value entities respectively. This reduces the number of interactions to $m^3$, where $m$ is a fixed hyperparameter. The cube-pruning method also has the benefit of alleviating the negative class imbalance by reducing the proportion of entries with the null label, and we analyze this effect in Section 5.1. To detect the most probable entity words, we obtain the respective entity scores from the diagonal of the table $\hat{y}^t$ containing the entity and relation scores (i.e., the front-most plane in Figure 2):

$$
\Phi_{i}^{\text{entity}} = P(\hat{y}_{ii}^{t}), \quad i \in \{1, \dots, n\} \tag{4}
$$

The entity scores are then ranked to obtain the pruned indices $\{1, \dots, m\}$, which are applied to each dimension of the cube representation.

To capture the hyper-relational structure between relation triplets and qualifier attributes, we use a bilinear interaction layer between each possible pair representation and word representation.
The categorical probability distribution over qualifier labels for each possible relation triplet and value entity is then computed as:

$$
P(\hat{y}_{i'j'k'}^{q}) = \mathrm{Softmax}(g_{i'j'}^{\intercal} U h_{k'}) \tag{5}
$$

where $i', j', k' \in \{1, \dots, m\}$ are the pruned indices and $U$ is a trainable bilinear weight matrix.

# 3.2.4 Training Objective

The training objective for the entity-relation table is the negative log-likelihood:

$$
\mathcal{L}_{t} = -\frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \log P(\hat{y}_{ij}^{t}) \tag{6}
$$

Similarly, the training objective for the qualifier dimension is the negative log-likelihood:

$$
\mathcal{L}_{q} = -\frac{1}{m^3} \sum_{i'=1}^{m} \sum_{j'=1}^{m} \sum_{k'=1}^{m} \log P(\hat{y}_{i'j'k'}^{q}) \tag{7}
$$

To enable end-to-end training, the overall cube-filling objective is aggregated as the sum of the two losses:

$$
\mathcal{L} = \mathcal{L}_{t} + \mathcal{L}_{q} \tag{8}
$$

# 3.2.5 Decoding

To decode the hyper-relational facts from the predicted scores, we implement a simple and efficient method and provide the pseudocode in Appendix D. As it is intractable to consider all possible solutions, a slight drop in decoding accuracy is acceptable. A key intuition is that if a valid qualifier exists, a corresponding relation triplet must also exist. Hence, we first decode the qualifier scores (Equation 5) to determine the span positions of the head entity, tail entity and value entity in each hyper-relational fact. Consequently, we can determine the relation and qualifier label from the corresponding entries in the relation scores (Equation 3) and qualifier scores respectively.
To handle entities that may contain multiple words, we consider adjacent non-null qualifier entries to correspond to the same head entity, tail entity, and value entity, hence belonging to the same hyper-relational fact. This assumption holds for $97.14\%$ of facts in the dataset. To find and merge the adjacent non-null entries, we use the `nonzero` operation, which is more computationally efficient than nested for-loops. For each group of adjacent entries that correspond to the same hyper-relational fact, we determine the relation label by averaging the corresponding relation scores. Similarly, we determine the qualifier label by averaging the corresponding qualifier scores. When using cube-pruning, we map the pruned indices back to the original indices before decoding. Appendix E has the model speed comparison.

# 4 Experiments

# 4.1 Experimental Settings

Evaluation Similar to other information extraction tasks, we use the Micro $F_{1}$ metric for evaluation on the development and test sets. For a predicted hyper-relational fact to be considered correct, the whole fact $f = (e_{head}, r, e_{tail}, q, e_{value})$ must match the ground-truth fact in terms of relation label, qualifier label and entity bounds.

Hyperparameters For the encoding module, we use the BERT language model, specifically the uncased base and large versions. We train for 30 epochs with a linear warmup for $20\%$ of training steps and a maximum learning rate of 5e-5. We employ AdamW as the optimizer and use a batch size of 32. For model selection and hyperparameter selection, we evaluate based on the $F_{1}$ on the development set. We use $m = 20$ for cube-pruning, and Appendix B has more experimental details.

# 4.2 Baseline Methods

As there are no existing models for hyper-relational extraction, we introduce two strong baselines that leverage pretrained language models.
The pipeline baseline is based on a competitive table-filling model for joint entity and relation extraction, while the generative baseline is extended from a state-of-the-art approach for end-to-end relation extraction. + +Pipeline Baseline As pipeline methods can serve as strong baselines for information extraction tasks (Zhong and Chen, 2021), we implement a pipeline method for hyper-relational extraction. Concretely, we first train a competitive relation extraction model architecture UniRE (Wang et al., 2021a) to extract relation triplets from each input sentence. Separately, we train a span extraction model based on BERT-Tagger (Devlin et al., 2019) that is conditioned on the input sentence and a relation triplet + +
to extract the value entities and corresponding qualifier label. However, as both stages fine-tune a pretrained language model, the pipeline method doubles the number of trainable parameters compared to an end-to-end method, which fine-tunes only one pretrained language model. To avoid an unfair comparison, as larger models are more sample-efficient (Kaplan et al., 2020), we use DistilBERT (Sanh et al., 2019) in both stages of the pipeline.

Generative Baseline Inspired by the flexibility of language models for complex tasks such as information extraction and controllable structure generation (Shen et al., 2022), we propose a generative method for hyper-relational extraction. Compared to a pipeline method, a generative method can perform hyper-relational extraction in an end-to-end fashion without task-specific modules (Paolini et al., 2021). Similar to existing generative methods for relation extraction (Huguet Cabot and Navigli, 2021; Chia et al., 2022), we use BART (Lewis et al., 2020), which takes the sentence as input and outputs a structured text sequence that is then decoded to form the extracted facts. For instance, given the sentence "Parker received his PhD from Harvard," the sequence-to-sequence model is trained to generate "Head Entity: Parker, Relation: educated at, Tail Entity: Harvard, Qualifier: academic degree, Value: PhD." The generated text is then decoded through simple text processing to form the hyper-relational fact (Parker, Educated At, Harvard, Academic Degree, PhD).

# 4.3 Main Results

We compare CubeRE with the baseline models and report the precision, recall, and $F_{1}$ scores with standard deviation in Table 3. The results demonstrate the general effectiveness of our model, as CubeRE has consistently higher $F_{1}$ scores in both the base and large model settings. While the pipeline baseline relies on a two-stage approach that is prone to error propagation, CubeRE can perform hyper-relational extraction in an end-to-end fashion. Hence, CubeRE is able to detect more valid hyper-relational facts, which is demonstrated by the higher recall and $F_{1}$ scores. Compared to the generative baseline, our cube-filling approach is able to explicitly consider the interaction between relation triplets and qualifiers to better extract hyper-relational facts. Furthermore, we argue that CubeRE is more interpretable than the generative baseline, as it can compute the score for each possible relation triplet and qualifier. Hence, CubeRE can also be more controllable, as it is possible to control the number of predicted facts by applying a threshold to the triplet and qualifier scores.

| Model | Parameters | Dev Precision | Dev Recall | Dev F1 | Test Precision | Test Recall | Test F1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Generative Baseline (Base) | 140M | 63.79 ± 0.27 | 59.94 ± 0.68 | 61.80 ± 0.37 | 64.60 ± 0.47 | 59.67 ± 0.35 | 62.03 ± 0.21 |
| Pipeline Baseline (Base) | 132M | 69.23 ± 0.30 | 58.21 ± 0.57 | 63.24 ± 0.44 | 69.00 ± 0.48 | 57.55 ± 0.19 | 62.75 ± 0.29 |
| CubeRE (Base) | 115M | 66.14 ± 0.88 | 64.39 ± 1.23 | 65.24 ± 0.82 | 65.82 ± 0.84 | 64.28 ± 0.25 | 65.04 ± 0.29 |
| Generative Baseline (Large) | 400M | 67.08 ± 0.49 | 65.73 ± 0.78 | 66.40 ± 0.47 | 67.17 ± 0.40 | 64.56 ± 0.58 | 65.84 ± 0.25 |
| CubeRE (Large) | 343M | 68.75 ± 0.82 | 68.88 ± 1.03 | 68.81 ± 0.46 | 66.39 ± 0.96 | 67.12 ± 0.69 | 66.75 ± 0.65 |

Table 3: Evaluation results for hyper-relational extraction on the HyperRED dataset.

# 4.4 Triplet-Based Evaluation

To further investigate the differences in model performance, we also report the results when considering only the triplet component of hyper-relational facts in Table 4. The results show that CubeRE has comparable performance to the pipeline baseline when considering only relation triplets. Hence, this suggests that the performance improvement in hyper-relational extraction is most likely due to more accurate qualifier extraction. Compared to the pipeline baseline, which has two separate encoders for triplet extraction and conditional qualifier extraction, CubeRE learns a shared representation of the input sentence that is guided by both the triplet and qualifier losses, facilitating the interaction between relation triplets and qualifiers. The triplet-qualifier interaction is important, as most qualifier labels are relatively relation-specific. This allows CubeRE to extract the qualifiers more accurately, resulting in better overall performance.

| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Generative Baseline | 69.96 ± 0.31 | 64.56 ± 0.21 | 67.15 ± 0.09 |
| Pipeline Baseline | 75.94 ± 0.66 | 66.41 ± 0.72 | 70.85 ± 0.13 |
| CubeRE | 72.45 ± 0.66 | 69.64 ± 0.53 | 71.01 ± 0.16 |

Table 4: Evaluation results on HyperRED considering only the triplet component of hyper-relational facts.

![](images/ad9aa261b17058c3bdab7be04dead9746a6da35cd6e2e558c91f189b2d260f55.jpg)
Figure 3: The effect of pruning threshold $m$ on Dev $F_{1}$. The model without pruning is indicated as $m = \infty$.
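The cube-pruning step of Section 3.2.3 (Equation 4) amounts to a top-$m$ index selection over the table diagonal. A minimal sketch, with random scores standing in for the predicted entity probabilities and illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, n_quals = 12, 4, 5  # sentence length, pruning threshold, |{⊥} ∪ Q|

# Stand-ins for the entity scores Φ_i read off the table diagonal (Equation 4).
entity_scores = rng.random(n)

# Keep the indices of the top-m most probable entity words, in original word order.
keep = np.sort(np.argsort(entity_scores)[-m:])

# The qualifier cube is then only materialised over the pruned indices:
# m^3 label distributions instead of n^3.
pruned_cube = rng.normal(size=(m, m, m, n_quals))
assert pruned_cube.size == m ** 3 * n_quals

# Before decoding, a pruned entry (i', j', k') maps back to original word positions.
i_p, j_p, k_p = 0, 1, 2
i, j, k = keep[i_p], keep[j_p], keep[k_p]
assert all(int(x) in keep.tolist() for x in (i, j, k))
```

This also makes the trade-off in Figure 3 visible: larger $m$ reintroduces null-dominated entries, while overly small $m$ can prune away true entity words and hurt recall.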
# 5 Analysis

In this section, we study the effect of cube-pruning and identify directions for future research. Further analysis is shown in Appendix F.

# 5.1 Effect of Pruning

In addition to improving the computational efficiency of CubeRE, as discussed in Section 3.2.3, our cube-pruning method may also improve the extraction performance of the model. During training, the cube-filling approach faces the issue of having mostly null entries, thus biasing the learning process with negative class imbalance (Li et al., 2020). By pruning the cube to consider only the entries associated with higher entity scores, the proportion of null entries is reduced, hence alleviating the class imbalance issue. This is supported by the trend in Figure 3, as relaxing the pruning threshold $m$ leads to reduced $F_{1}$ scores. On the other hand, overly strict pruning will reduce the recall, negatively affecting the overall performance.

# 5.2 Model Performance Breakdown

To identify directions for future research in hyper-relational extraction, we analyze the model performance separately for each general qualifier category. As shown in Figure 4, there is a variance in model performance across qualifier categories that cannot be fully explained by their proportion in the dataset. For instance, although the "Time" category comprises a majority of the qualifiers, it does not have the highest performance. This suggests that future research may focus on areas such as temporal reasoning, which is an open challenge for language models (Vashishtha et al., 2020; Dhingra et al., 2022). In addition, CubeRE demonstrates strong performance across all categories, which suggests that it can serve as a general extraction model for different qualifiers.
![](images/5285612ed3eff56abd07584ffe5723e22fb7553a6723e4469d6a7993618184a2.jpg)
Figure 4: Model performance breakdown based on the general categories of qualifiers shown in Table 1.

# 6 Related Work

Knowledge Graph Construction In addition to extraction from natural language text, the underlying facts for knowledge graphs can also be extracted from semi-structured websites (Lockard et al., 2018), tables (Dong et al., 2020) or link prediction (Wang et al., 2017). However, textual extraction may be a more pressing challenge due to the vast amount of unstructured textual data on the web (Lockard et al., 2020). Hence, this work focuses on extracting facts from unstructured text.

Relation Extraction Although relation extraction is a well-established task, most methods only consider the relation between two entities. There have been several directions to extract more complex facts, such as n-ary relation extraction or document-level relation extraction (Yao et al., 2019). However, n-ary relation extraction (Jia et al., 2019; Akimoto et al., 2019) has a limited scope, as the available datasets address only the biomedical domain. On the other hand, document-level (Tan et al., 2022a) and cross-document relation extraction (Yao et al., 2021) are fundamentally limited by the binary relation structure, which does not consider hyper-relational information. Although dialogue-level relation extraction (Chen et al., 2020) may have a more complex structure consisting of utterances and speaker information, current datasets (Welleck et al., 2019) focus on the binary relation format. Hence, we propose to fill the gap by contributing HyperRED, a general-purpose and large-scale dataset for hyper-relational extraction that is not limited to any specific domain.

Information Extraction In this work, we focus on relation extraction, which falls under the broad scope of information extraction (Bing et al., 2015).
Hence, a possible future direction is to adapt CubeRE for extracting other types of information, such as attributes (Bing et al., 2013), events (Wang et al., 2021b), arguments (Cheng et al., 2020, 2022), aspect-based sentiment (Xu et al., 2021; Yu Bai Jian et al., 2021), commonsense knowledge (Ghosal et al., 2021), or visual scene relations (Andrews et al., 2019). Additionally, as HyperRED relies on distant supervision for dataset construction, it is necessary to further explore how to mitigate the noise in distantly supervised datasets for information extraction tasks (Nayak et al., 2021).

Table-Filling Table-filling is a popular approach for entity and relation extraction tasks (Miwa and Sasaki, 2014; Gupta et al., 2016; Zhang et al., 2017). It has several advantages, including interpretability and an end-to-end formulation. Hence, table-filling approaches are able to avoid the cascading error propagation faced by pipeline models, despite a compact parameter set. Inspired by the benefits of table-filling, we extend the approach to cube-filling to extract hyper-relational facts by considering qualifiers for each relation triplet. To our knowledge, our proposed model is the first cube-filling approach for information extraction tasks.

# 7 Conclusions

In this work, we propose the hyper-relational extraction task for extracting richer and more complete facts from natural text. To support the task, we introduce HyperRED, a large-scale and general-purpose dataset that is not restricted to any specific domain. As there is no available model for hyper-relational extraction, we propose an end-to-end cube-filling approach inspired by table-filling methods for relation extraction. We further propose a cube-pruning method to reduce computational cost and alleviate negative class imbalance during training. Experiments on HyperRED demonstrate the effectiveness of CubeRE compared to strong baselines, setting the benchmark for future work.
# Limitations

Model Limitations Regarding the CubeRE model, we propose a cube-pruning method to improve computational efficiency and reduce the negative class imbalance. The cube-pruning threshold is fixed, even though inputs can have different sentence lengths; hence, it may result in overly strict pruning for extremely long sentences. However, the pruning threshold is analogous to the maximum sequence length in most transformer-based models and may need to be tuned for the specific dataset or application scenario. The optimal cube-pruning threshold is selected based on the analysis in Section 5.1. CubeRE may also not work well for overlapping or nested entity spans, which affect $2.11\%$ of the sentences. This can be considered a general limitation of table-filling methods for relation extraction, and future work may need to consider a span-based approach (Xu et al., 2021) to address this issue.

Data Limitations Regarding the HyperRED dataset, the distant supervision method of data collection may not align all valid facts present in the text articles. This is due to the possible incompleteness of the knowledge graph, which is an open research challenge (Nickel et al., 2016). On the other hand, it is not feasible to manually annotate all possible facts due to constraints on annotation time and cost. Furthermore, there are a large number of relation and qualifier labels to consider, resulting in a challenging task for human annotators. A promising and practical method to address the challenges of distant supervision is to adopt a human-in-the-loop annotation scheme for RE (Tan et al., 2022b). Such a scheme can increase the number of facts in a dataset by training an RE model to predict more candidate facts for each text article, which are then reviewed and filtered by humans.
However, this model-assisted annotation approach is not applicable to the construction of HyperRED, as it relies on existing strong RE models, and no suitable models for hyper-relational extraction existed prior to this work.

# Ethics Statement

Model Ethics Regarding model generalization, we expect the introduced models to perform similarly on factual text articles, such as news articles from various domains, similar to the proposed dataset. However, they may not perform well on more casual text formats such as chat discussions or opinion pieces. On the other hand, we note that the models extract hyper-relational facts from the input sentences and do not guarantee the factual correctness of the extracted facts. This is an ethical consideration for RE models in general, and further fact verification modules (Nie et al., 2019) are necessary before the facts can be integrated into knowledge graphs or downstream applications.

Data Ethics For the dataset construction, we collect texts and facts from Wikipedia and Wikidata respectively, which is common practice for distantly supervised datasets. Wikidata facts are in the public domain, while Wikipedia texts are licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License. Hence, we are free to adapt the texts to construct our dataset, which will also be released under the same license. For the human data annotation stage, we employ two professional data annotators, who have been fairly compensated. The compensation is negotiated based on the task complexity and an assessment of a reasonable annotation speed. Under the agreed annotation scheme, each annotation batch is required to undergo quality checking, where a portion of samples is manually checked. If any batch does not meet the acceptance criterion of $95\%$ accuracy, the annotators are required to fix the errors before the batch can be accepted.
The overall quality of the dataset is evaluated in Section 2.1 and Section 2.2, and we analyze the dataset characteristics in Section 2.3, with further analysis in Section F. + +# References + +Kosuke Akimoto, Takuya Hiraoka, Kunihiko Sadamasa, and Mathias Niepert. 2019. Cross-sentence n-ary relation extraction using lower-arity universal schemas. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6225-6231, Hong Kong, China. Association for Computational Linguistics. +Martin Andrews, Yew Ken Chia, and Sam Witteveen. 2019. Scene graph parsing by attention graph. In Proceedings of the Second Workshop on Visually Grounded Interaction and Language (ViGIL) at NeurIPS 2018. +Nguyen Bach and Sameer Badaskar. 2007. A review of relation extraction. Literature review for Language and Statistics II, 2:1-15. +Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895-2905, Florence, Italy. Association for Computational Linguistics. +Lidong Bing, Sneha Chaudhari, Richard Wang, and William Cohen. 2015. Improving distant supervision for information extraction using label propagation through lists. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 524-529, Lisbon, Portugal. Association for Computational Linguistics. +Lidong Bing, Wai Lam, and Tak-Lam Wong. 2013. Wikipedia entity expansion and attribute extraction from the web using semi-supervised learning. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, WSDM '13, page 567-576, New York, NY, USA. Association for Computing Machinery. +Hui Chen, Pengfei Hong, Wei Han, Navonil Majumder, and Soujanya Poria. 2020. 
Dialogue relation extraction with document-level heterogeneous graph attention networks. CoRR, abs/2009.05092. +Liying Cheng, Lidong Bing, Ruidan He, Qian Yu, Yan Zhang, and Luo Si. 2022. IAM: A comprehensive and large-scale dataset for integrated argument mining tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2277-2287, Dublin, Ireland. Association for Computational Linguistics. +Liying Cheng, Lidong Bing, Qian Yu, Wei Lu, and Luo Si. 2020. APE: Argument pair extraction from peer review and rebuttal via multi-task learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7000-7011, Online. Association for Computational Linguistics. + +Yew Ken Chia, Lidong Bing, Soujanya Poria, and Luo Si. 2022. RelationPrompt: Leveraging prompts to generate synthetic data for zero-shot relation triplet extraction. In Findings of the Association for Computational Linguistics: ACL 2022, pages 45-57, Dublin, Ireland. Association for Computational Linguistics. +Yew Ken Chia, Sam Witteveen, and Martin Andrews. 2020. Red dragon AI at TextGraphs 2020 shared task : LIT : LSTM-interleaved transformer for multi-hop explanation ranking. In Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs), pages 115-120, Barcelona, Spain (Online). Association for Computational Linguistics. +Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology, page 77-86. AAAI Press. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W. Cohen. 2022. Time-aware language models as temporal knowledge bases. Transactions of the Association for Computational Linguistics, 10:257-273.
Xin Luna Dong, Hannaneh Hajishirzi, Colin Lockard, and Prashant Shiralkar. 2020. Multi-modal information extraction from text, semi-structured, and tabular data on the web. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 23-26, Online. Association for Computational Linguistics.
Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Fredo Erxleben, Michael Günther, Markus Krötzsch, Julian Mendez, and Denny Vrandecic. 2014. Introducing Wikidata to the linked data web. In The Semantic Web - ISWC 2014 - 13th International Semantic Web Conference, Riva del Garda, Italy, October 19-23, 2014, Proceedings, Part I, volume 8796 of Lecture Notes in Computer Science, pages 50-65. Springer.
Mikhail Galkin, Priyansh Trivedi, Gaurav Maheshwari, Ricardo Usbeck, and Jens Lehmann. 2020. Message passing for hyper-relational knowledge graphs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7346-7359, Online. Association for Computational Linguistics.
+Deepanway Ghosal, Pengfei Hong, Siqi Shen, Navonil Majumder, Rada Mihalcea, and Soujanya Poria. 2021. CIDER: Commonsense inference for dialogue explanation and reasoning. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 301-313, Singapore and Online. Association for Computational Linguistics. +Saiping Guan, Xiaolong Jin, Yuzhuo Wang, and Xueqi Cheng. 2019. Link prediction on n-ary relational data. In The World Wide Web Conference, WWW '19, page 583-593, New York, NY, USA. Association for Computing Machinery. +Pankaj Gupta, Hinrich Schütze, and Bernt Andrassy. 2016. Table filling multi-task recurrent neural network for joint entity and relation extraction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2537-2547, Osaka, Japan. The COLING 2016 Organizing Committee. +Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4803-4809, Brussels, Belgium. Association for Computational Linguistics. +Eduard Hovy, Roberto Navigli, and Simone Paolo Ponzetto. 2013. Collaboratively built semi-structured content and artificial intelligence: The story so far. Artificial Intelligence. +Pere-Lluis Huguet Cabot and Roberto Navigli. 2021. REBEL: Relation extraction by end-to-end language generation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2370-2381, Punta Cana, Dominican Republic. Association for Computational Linguistics. +Robin Jia, Cliff Wong, and Hoifung Poon. 2019. Document-level n-ary relation extraction with multiscale representation learning. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3693-3704, Minneapolis, Minnesota. Association for Computational Linguistics. +Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. CoRR, abs/2001.08361. +Douglas B. Lenat. 1995. Cyc: A large-scale investment in knowledge infrastructure. Commun. ACM. + +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics. +Xiaoya Li, Xiaofei Sun, Yuxian Meng, Junjun Liang, Fei Wu, and Jiwei Li. 2020. Dice loss for data-imbalanced NLP tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 465-476, Online. Association for Computational Linguistics. +Colin Lockard, Xin Luna Dong, Arash Einolghozati, and Prashant Shiralkar. 2018. Ceres: Distantly supervised relation extraction from the semi-structured web. Proc. VLDB Endow., 11(10):1084-1096. +Colin Lockard, Prashant Shiralkar, Xin Luna Dong, and Hannaneh Hajishirzi. 2020. Web-Scale Knowledge Collection, page 888-889. Association for Computing Machinery, New York, NY, USA. +Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60, Baltimore, Maryland. 
Association for Computational Linguistics. +Pablo N. Mendes, Max Jakob, Andrés García-Silva, and Christian Bizer. 2011. Dbpedia spotlight: Shedding light on the web of documents. In Proceedings of the 7th International Conference on Semantic Systems, I-Semantics '11, page 1-8, New York, NY, USA. Association for Computing Machinery. +Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011, Suntec, Singapore. Association for Computational Linguistics. +Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1858-1869, Doha, Qatar. Association for Computational Linguistics. +Tapas Nayak, Navonil Majumder, and Soujanya Poria. 2021. Improving distantly supervised relation extraction with self-ensemble noise filtering. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 1031-1039, Held Online. INCOMA Ltd. + +Tapas Nayak and Hwee Tou Ng. 2020. Effective modeling of encoder-decoder architecture for joint entity and relation extraction. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8528-8535. +Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2016. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11-33. +Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neural semantic matching networks. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):6859-6866. 
Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang, and Stefano Soatto. 2021. Structured prediction as translation between augmented natural languages. In International Conference on Learning Representations.
Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Machine Learning and Knowledge Discovery in Databases, pages 148-163, Berlin, Heidelberg. Springer Berlin Heidelberg.
Paolo Rosso, Dingqi Yang, and Philippe Cudré-Mauroux. 2020. Beyond Triplets: Hyper-Relational Knowledge Graph Embedding for Link Prediction, page 1885-1896. Association for Computing Machinery, New York, NY, USA.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108.
Chenhui Shen, Liying Cheng, Ran Zhou, Lidong Bing, Yang You, and Luo Si. 2022. MReD: A meta-review dataset for structure-controllable text generation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2521-2535, Dublin, Ireland. Association for Computational Linguistics.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.
Qingyu Tan, Ruidan He, Lidong Bing, and Hwee Tou Ng. 2022a. Document-level relation extraction with adaptive focal loss and knowledge distillation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1672-1681, Dublin, Ireland. Association for Computational Linguistics.
Qingyu Tan, Lu Xu, Lidong Bing, and Hwee Tou Ng. 2022b. Revisiting DocRED - addressing the overlooked false negative problem in relation extraction. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP).
+ +James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERIFICATION. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics. +Siddharth Vashishtha, Adam Poliak, Yash Kumar Lal, Benjamin Van Durme, and Aaron Steven White. 2020. Temporal reasoning in natural language inference. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4070-4078, Online. Association for Computational Linguistics. +Jue Wang and Wei Lu. 2020. Two are better than one: Joint entity and relation extraction with table-sequence encoders. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1706-1721, Online. Association for Computational Linguistics. +Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724-2743. +Yijun Wang, Changzhi Sun, Yuanbin Wu, Hao Zhou, Lei Li, and Junchi Yan. 2021a. UniRE: A unified label space for entity relation extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 220-231, Online. Association for Computational Linguistics. +Ziqi Wang, Xiaozhi Wang, Xu Han, Yankai Lin, Lei Hou, Zhiyuan Liu, Peng Li, Juanzi Li, and Jie Zhou. 2021b. CLEVE: Contrastive Pre-training for Event Extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6283-6297, Online. 
Association for Computational Linguistics.
Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3731-3741, Florence, Italy. Association for Computational Linguistics.
Jianfeng Wen, Jianxin Li, Yongyi Mao, Shini Chen, and Richong Zhang. 2016. On the representation and embedding of knowledge bases beyond binary relations. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16, page 1300-1307. AAAI Press.
Chenyan Xiong, Russell Power, and Jamie Callan. 2017. Explicit semantic ranking for academic search via knowledge graph embedding. In Proceedings of the 26th International Conference on World Wide Web, WWW '17, page 1271-1279, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
Lu Xu, Yew Ken Chia, and Lidong Bing. 2021. Learning span-level interactions for aspect sentiment triplet extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4755-4766, Online. Association for Computational Linguistics.
Yuan Yao, Jiaju Du, Yankai Lin, Peng Li, Zhiyuan Liu, Jie Zhou, and Maosong Sun. 2021. CodRED: A cross-document relation extraction dataset for acquiring knowledge in the wild. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4452-4472, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. DocRED: A large-scale document-level relation extraction dataset.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 764-777, Florence, Italy. Association for Computational Linguistics.
Wen-tau Yih and Hao Ma. 2016. Question answering with knowledge base, web and beyond. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorial Abstracts, pages 8-10, San Diego, California. Association for Computational Linguistics.
Samson Yu Bai Jian, Tapas Nayak, Navonil Majumder, and Soujanya Poria. 2021. Aspect sentiment triplet extraction using reinforcement learning. In Proceedings of the 30th ACM International Conference on Information and Knowledge Management, CIKM '21, page 3603-3607, New York, NY, USA. Association for Computing Machinery.
Ningyu Zhang, Shumin Deng, Zhanlin Sun, Guanying Wang, Xi Chen, Wei Zhang, and Huajun Chen. 2019. Long-tail relation extraction via knowledge graph embeddings and graph convolution networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3016-3025, Minneapolis, Minnesota. Association for Computational Linguistics.
Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35-45, Copenhagen, Denmark. Association for Computational Linguistics.
Zexuan Zhong and Danqi Chen. 2021. A frustratingly easy approach for entity and relation extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 50-61, Online. Association for Computational Linguistics.

# A Annotation Guide

This section explains the guidelines for human annotators.
The task is to classify whether each hyper-relational fact can be reasonably extracted from a piece of text. Each annotation sample contains one sentence and one corresponding fact for judgment. The annotator should classify each sample as "Correct", "Invalid Triplet", or "Invalid Qualifier". Each hyper-relational fact has five components, with the format (head entity, relation label, tail entity, qualifier label, value entity). The head entity is the main subject entity of the relationship. The relation label is the category of relationship expressed between the head and tail entity. The tail entity is the object entity of the relationship, paired with the head entity. The qualifier label is the category of the qualifier information. The value entity is the corresponding value of the qualifier that is applied to the relation triplet (head, relation, tail).

The value entity can contain a date, a quantity, or a short piece of text which is the mentioned name of the entity. For the annotation objective, we want to know whether this piece of information is clearly expressed by the given text. All the entities, relations, and qualifiers exist in the Wikidata database, so annotators can refer to the relation or qualifier definitions at https://www.wikidata.org for clarification. The annotation steps are as follows:

1. Read and understand the text sample, which is a continuous sequence of words. Then, consider the corresponding hyper-relational fact.
2. First check the triplet (head, relation, tail) of the fact. If the head and tail entity mentioned in the text do not clearly express the relation's meaning, then the whole fact should be marked as "Invalid Triplet".
3. Check the (qualifier, value) components. If the value mentioned in the text does not clearly express the qualifier meaning or is not directly related to the triplet, then the fact should be marked as "Invalid Qualifier".
4. If there is no error in the fact, then it can be marked as "Correct".

For example, given the sentence "The film's story earned Leonard Spigelgass a nomination as Best Story for the 23rd Academy Awards.", the fact (Leonard Spigelgass, nominated for, Best Story, statement is subject of, 23rd Academy Awards) is correct, as Leonard was nominated and the main topic is the Academy Awards. However, given the sentence "Prince Koreyasu was the son of Prince Munetaka who was the sixth shogun.", the fact (Prince Koreyasu, occupation, shogun, replaces, Prince Munetaka) has an invalid triplet, as we do not know whether Koreyasu became a shogun. On the other hand, given the sentence "Robin Johns left Northamptonshire at the end of the 1971 season.", the fact (Robin Johns, member of sports team, Northamptonshire, Start Time, 1971) has an invalid qualifier, as the qualifier label should be "End Time" instead of "Start Time".

| Data Setting | Annotation Type | Sentences | Facts | Entities | Average Sentence Length | Average Entity Length |
| --- | --- | --- | --- | --- | --- | --- |
| Train | Distant-Supervised | 39,840 | 39,978 | 32,539 | 31.91 words | 1.67 words |
| Dev | Human Annotated | 1,000 | 1,220 | 1,912 | 30.30 words | 1.71 words |
| Test | Human Annotated | 4,000 | 4,796 | 5,842 | 30.06 words | 1.69 words |

Table 5: Detailed statistics for the HyperRED dataset.

# B Experiment Details

Hyperparameters Table 8 shows the details of our experimental setup and model hyperparameters. For the analysis experiments in Section 5, we use the BERT-Base version of CubeRE and report the $F_{1}$ metric on the development set of HyperRED unless otherwise stated in the specific subsection.

Pipeline Baseline Details For the pipeline baseline, we use DistilBERT as the language model encoder for both the triplet extraction and conditional qualifier extraction stages. Both stages of the pipeline are fine-tuned separately on the gold labels. At inference time, the triplet extraction stage takes the sentence as input and outputs the predicted relation triplets.
For each predicted relation triplet, the conditional qualifier extractor takes the sentence and the relation triplet as input to predict the possible qualifiers, where each qualifier consists of the qualifier label and value entity. The input to the qualifier extraction model is the concatenated sentence and relation triplet. For example, the sentence "Leonard Parker received his PhD from Harvard University in 1967." and the relation triplet (Leonard Parker, Educated At, Harvard University) are concatenated to become "Leonard Parker received his PhD from Harvard University in 1967. Leonard Parker | Educated At | Harvard University". The outputs of both stages are then merged to form the predicted hyper-relational facts. Following BERT-Tagger, the conditional qualifier extraction model is trained using the cross-entropy loss for sequence labeling. To encode the qualifier information as sequence labels, we use the BIO tagging scheme, where the sequence label corresponds to the possible qualifier label for each entity word. For both stages, which are trained separately, we use the same number of epochs, learning rate and batch size as the CubeRE model for fairness.

Generative Baseline Details The generative baseline model predicts hyper-relational facts by learning to generate a text sequence in a special structured format, as demonstrated in Section 4.2. Note that if the sentence contains multiple hyper-relational facts, the desired output sequence is simply the concatenation of the structured text for each fact. The multiple facts can easily be decoded from the structured text format with simple text processing such as regular expressions. As the input and output of the model are text sequences that do not violate the model vocabulary, the generative baseline can be trained using a standard sequence-to-sequence modeling objective. For training, we use the same number of epochs, learning rate and batch size as the CubeRE model for fairness.
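The concatenated input for the pipeline's qualifier-extraction stage can be sketched with a small helper. The function name is ours; only the "sentence head | relation | tail" format comes from the description above.

```python
def format_qualifier_input(sentence: str, head: str, relation: str, tail: str) -> str:
    """Concatenate the sentence with a predicted relation triplet, matching
    the 'head | relation | tail' format used by the pipeline baseline."""
    return f"{sentence} {head} | {relation} | {tail}"

example = format_qualifier_input(
    "Leonard Parker received his PhD from Harvard University in 1967.",
    "Leonard Parker", "Educated At", "Harvard University",
)
# example: "Leonard Parker received his PhD from Harvard University in 1967.
#           Leonard Parker | Educated At | Harvard University"
```

Conditioning the second stage on this single string lets a standard sequence-labeling model predict qualifiers for one specific triplet at a time.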
| Model | Parameters | Dev Precision | Dev Recall | Dev $F_1$ | Test Precision | Test Recall | Test $F_1$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Generative Baseline | 140M | 63.79 ± 0.27 | 59.94 ± 0.68 | 61.80 ± 0.37 | 64.60 ± 0.47 | 59.67 ± 0.35 | 62.03 ± 0.21 |
| Pipeline Baseline | 132M | 69.23 ± 0.30 | 58.21 ± 0.57 | 63.24 ± 0.44 | 69.00 ± 0.48 | 57.55 ± 0.19 | 62.75 ± 0.29 |
| CubeRE | 115M | 66.14 ± 0.88 | 64.39 ± 1.23 | 65.24 ± 0.82 | 65.82 ± 0.84 | 64.28 ± 0.25 | 65.04 ± 0.29 |
| Pipeline Baseline (Medium) | 221M | 69.70 ± 1.08 | 62.33 ± 0.50 | 65.80 ± 0.54 | 69.38 ± 0.39 | 61.96 ± 0.54 | 65.46 ± 0.32 |
| Generative Baseline (Large) | 400M | 67.08 ± 0.49 | 65.73 ± 0.78 | 66.40 ± 0.47 | 67.17 ± 0.40 | 64.56 ± 0.58 | 65.84 ± 0.25 |
| CubeRE (Large) | 343M | 68.75 ± 0.82 | 68.88 ± 1.03 | 68.81 ± 0.46 | 66.39 ± 0.96 | 67.12 ± 0.69 | 66.75 ± 0.65 |
| Pipeline Baseline (Large) | 680M | 70.58 ± 0.78 | 66.58 ± 0.66 | 68.52 ± 0.32 | 69.21 ± 0.55 | 64.27 ± 0.24 | 66.65 ± 0.28 |

Table 6: Evaluation results for hyper-relational extraction on the HyperRED dataset.

| Model | Training Time | Inference Speed | Memory Usage |
| --- | --- | --- | --- |
| Generative | 1.93 hrs | 37 samples/s | 3.9 GB |
| Pipeline | 2.41 hrs | 181 samples/s | 5.5 GB |
| CubeRE | 3.08 hrs | 160 samples/s | 6.6 GB |

Table 7: Comparison of the computational cost for the Generative, Pipeline and CubeRE models.

| Experimental Detail | Value |
| --- | --- |
| GPU Model | Nvidia V100 |
| CUDA Version | 11.3 |
| Python Version | 3.7.12 |
| PyTorch Version | 1.11.0 |
| Wikidata Version | 20170503 |
| Long-Tailed Threshold | 10 |
| Pruning Threshold | 20 |
| Maximum Sequence Length (words) | 80 |
| FFN Hidden Size | 150 |
| Learning Rate Decay | 0.9 |
| Adam Epsilon | 1e-12 |
| Adam Weight Decay Rate | 1e-5 |

Table 8: List of experimental details.

![](images/3a556a46c80866bfa1f4a6e7807d2da98a9fc03f7d1b29fb42dc9f571acfea73.jpg)
Figure 5: Histogram distribution of the number of relation labels covered by each qualifier label.

# C Dataset Details

Dataset Statistics Table 5 shows the detailed statistics of HyperRED, such as the number of unique facts and entities, as well as the average number of words in each sentence. Table 9 and Table 10 show the sets of relation and qualifier labels respectively. For the construction of the dataset, we use Wikidata, which has 594,088 hyper-relational facts, and introductions from English Wikipedia, which has 4,650,000 articles.

Distant Supervision Example In this section, we demonstrate the distant supervision process for fact alignment with a sentence example. Given the input sentence "Leonard Parker received his PhD from Harvard University in 1967", we first perform entity linking, which detects the entity mentions and their Wikipedia IDs: {(Leonard Parker, Q3271532), (PhD, Q752297), (Harvard University, Q13371)}. As the entity linker does not consider dates or numbers, we use the spaCy tool to extract such spans: {(1967, Date)}. Hence, the set of linked entities in the sentence is {(Leonard Parker, Q3271532), (PhD, Q752297), (Harvard University, Q13371), (1967, Date)}. To address cases where the sentence contains unresolved pronouns such as "he" or "she", we use the Stanford CoreNLP tool to detect and resolve them to a suitable entity in the set of linked entities above. For each hyper-relational fact in the Wikidata knowledge graph, we attempt to align it to the sentence based on the entities in the fact. If the head entity, tail entity and value entity are all present in the linked entity set of the sentence, the alignment is successful. For example, given the fact (Leonard Parker, Educated At, Harvard University, End Time, 1967), where the head entity, tail entity and value entity are (Leonard Parker, Q3271532), (Harvard University, Q13371) and (1967, Date) respectively, the fact is successfully aligned with the sentence, as all three entities are present in the set of linked entities.
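The alignment rule just described — a fact is kept only if its head, tail, and value entities all appear in the sentence's linked-entity set — can be sketched as follows (function and variable names are ours):

```python
def fact_aligns(fact, linked_entities) -> bool:
    """Return True iff the head, tail and value entities of a
    hyper-relational fact all appear in the linked-entity set."""
    head, _relation, tail, _qualifier, value = fact
    return {head, tail, value} <= set(linked_entities)

# Toy mirror of the worked example above (entity mentions only, IDs omitted).
linked = {"Leonard Parker", "PhD", "Harvard University", "1967"}
fact = ("Leonard Parker", "Educated At", "Harvard University", "End Time", "1967")
aligned = fact_aligns(fact, linked)  # True: all three entities are linked
```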
If any of these entities are missing from the set of linked entities, the alignment is unsuccessful and we do not include the fact in the dataset. If a sentence has no successfully aligned facts, we do not include it in the dataset.

Annotation Challenges The human annotation of the dataset may be imperfect due to the complexity of the hyper-relational fact structure, the diversity of relation and qualifier labels, and possibly ambiguous facts. Hyper-relational facts require annotators to jointly consider the relation triplet and the qualifier, which is more challenging than previous datasets that commonly consider only the relation between two entities. The annotators are also required to consider the definitions of a large set of relation and qualifier labels. This can be difficult when some relations or qualifiers are similar in meaning. Lastly, there may be ambiguous cases where multiple entities are mentioned in relation to a topic and it is not clear which entity is the main subject.

Relation-Specific Qualifiers To investigate the link between relation triplets and qualifiers, we plot a histogram distribution in Figure 5. A majority (32) of the qualifier labels are each linked to a small number of relation labels (1-5), which suggests that most qualifiers are highly relation-specific. For example, the "electoral district" qualifier label is only linked to the "candidacy in election" and "position held" relation labels. On the other hand, a few (3) qualifier labels are each linked to a large number $(16+)$ of relation labels, and are not specific to any particular relation. For example, the "end time" qualifier is linked to 35 relation labels. Hence, it is generally important to consider the interaction between relation triplets and qualifiers in extracting hyper-relational facts.
However, it is not trivial to predict the qualifier based only on the relation, as some qualifier labels are relation-agnostic, and the model is also required to consider the value entity.

# D Decoding Algorithm

Algorithm 1: Pseudocode of our decoding algorithm in a PyTorch-like style.

    # y_t: input entity-relation scores (Eq. 3)
    # y_q: input qualifier scores (Eq. 5)
    facts = []   # output hyper-relational facts
    groups = []  # hyper-relational span groups

    # Find and merge adjacent non-null entries
    for i, j, k in y_q.argmax(-1).nonzero():
        entry = (i, i + 1, j, j + 1, k, k + 1)
        for spans in groups:
            if is_adjacent(spans, entry):
                merge(spans, entry)
                break
        else:
            groups.append(entry)

    # Aggregate relation and qualifier scores
    for spans in groups:
        i, i2, j, j2, k, k2 = spans
        rScores = y_t[i:i2, j:j2]
        r_label = rScores.mean((0, 1)).argmax()
        qScores = y_q[i:i2, j:j2, k:k2]
        q_label = qScores.mean((0, 1, 2)).argmax()
        facts.append((spans, r_label, q_label))

We include the pseudocode of the proposed decoding method in Algorithm 1. Note that we can use the nonzero operation to find and merge adjacent non-null entries, as it returns the entries sorted in lexicographic order. This ensures that entries are seen in consecutive order if they correspond to the same hyper-relational fact.

![](images/cef8b5179136d0769c53166947163990b75a1340eb255a8ae17606329263feaf.jpg)
Figure 6: The effect of training data size on Dev $F_{1}$. The training set of HyperRED is distantly supervised, while the development and test sets are human-annotated.

# E Model Costs

Table 7 shows a comparison of total training time, inference speed in samples per second, and GPU memory usage for the different models. We observe that CubeRE has a computational cost comparable to the generative and pipeline models. This result shows that our cube-pruning method is effective in keeping the model computationally efficient and practical in real applications.
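Algorithm 1 above leaves `is_adjacent` and `merge` abstract. One plausible runnable reading, sketched here in NumPy as a stand-in for the actual PyTorch implementation (our illustration, not the authors' released code), merges entries whose head/tail/value intervals overlap or abut:

```python
import numpy as np

def _intervals(spans):
    # View a flat (i, i2, j, j2, k, k2) group as three [start, end) intervals.
    return [(spans[0], spans[1]), (spans[2], spans[3]), (spans[4], spans[5])]

def is_adjacent(spans, entry):
    # Two groups touch if all three (head, tail, value) intervals overlap or abut.
    return all(s0 <= e1 and s1 <= e0
               for (s0, e0), (s1, e1) in zip(_intervals(spans), _intervals(entry)))

def decode(y_t, y_q):
    facts, groups = [], []
    # np.argwhere returns coordinates in lexicographic order,
    # mirroring the nonzero op in Algorithm 1.
    for i, j, k in np.argwhere(y_q.argmax(-1) != 0):
        entry = [i, i + 1, j, j + 1, k, k + 1]
        for spans in groups:
            if is_adjacent(spans, entry):
                for n in range(0, 6, 2):  # grow each interval to cover the entry
                    spans[n] = min(spans[n], entry[n])
                    spans[n + 1] = max(spans[n + 1], entry[n + 1])
                break
        else:
            groups.append(entry)
    # Aggregate relation and qualifier scores over each merged group.
    for i, i2, j, j2, k, k2 in groups:
        r_label = y_t[i:i2, j:j2].mean((0, 1)).argmax()
        q_label = y_q[i:i2, j:j2, k:k2].mean((0, 1, 2)).argmax()
        facts.append((tuple(map(int, (i, i2, j, j2, k, k2))),
                      int(r_label), int(q_label)))
    return facts

# Toy scores: two adjacent qualifier entries at (0, 2, 3) and (1, 2, 3)
# merge into one fact whose head span covers tokens 0-2.
y_q = np.zeros((4, 4, 4, 3)); y_q[..., 0] = 1.0
y_q[0, 2, 3, 1] = 2.0; y_q[1, 2, 3, 1] = 2.0
y_t = np.zeros((4, 4, 3)); y_t[0, 2, 2] = 1.0; y_t[1, 2, 2] = 1.0
print(decode(y_t, y_q))  # [((0, 2, 2, 3, 3, 4), 2, 1)]
```

Here label 0 is assumed to be the null class; the merge-in-place on lists replaces the abstract `merge` call.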
Note that we compute the statistics for the two-stage pipeline model by summing the time taken and memory used by both stages.

# F Further Analysis

Additional Pipeline Results For a fair comparison of the main results in Section 4.3, we do not include the pipeline baseline in the large model setting, as it would have 680M parameters, far more than the other models. We also do not include a BERT-Base version of the pipeline baseline in the main results, as it would have 221M parameters, which is comparable to neither the base nor the large model setting. Hence, we only include the pipeline baseline using DistilBERT in the main result discussion, as its parameter count is comparable to the base model setting. However, we include the pipeline baseline with BERT-Base in Table 6 for reference.

Effect of Pruning The main effect of cube-pruning is to reduce the sparsity of the cube entries by retaining the entries which are most likely to be valid entities. To quantify the effect on sparsity, we measure the cube without pruning to consist of $99.9900\%$ null entries on average. When using pruning threshold $m = 20$, the cube consists of $99.9098\%$ null entries on average. Hence, there is a roughly tenfold increase in the proportion of non-null entries when using pruning.

Effect of Training Data Size The HyperRED training set consists of distantly supervised data, which enables large-scale and diverse model training. However, there may be noisy samples that affect model performance. Hence, we aim to study whether the quantity of data can overcome noise in the training set. As shown in Figure 6, we observe a strictly increasing trend when the size of the training set is increased from $20\%$ to $100\%$ of the original size. Thus, the results suggest that the quantity of data remains a beneficial factor for model performance despite some noise in the distantly supervised training set.
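The tenfold figure follows directly from the two reported null-entry rates:

```python
# Null-entry rates reported above, with and without cube-pruning.
non_null_unpruned = 1 - 0.999900  # fraction of non-null entries, no pruning
non_null_pruned = 1 - 0.999098    # fraction of non-null entries, m = 20

ratio = non_null_pruned / non_null_unpruned
print(round(ratio, 2))  # 9.02 -- roughly a tenfold increase
```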
| Wiki ID | Label | Description |
| --- | --- | --- |
| P6 | head of government | head of the executive power of this town, city, municipality, state, country, or other governmental body |
| P17 | country | sovereign state of this item (not to be used for human beings) |
| P19 | place of birth | most specific known (e.g. city instead of country, or hospital instead of city) birth location of a person, animal or fictional character |
| P26 | spouse | the subject has the object as their spouse (husband, wife, partner, etc.). Use "unmarried partner" (P451) for non-married companions |
| P27 | country of citizenship | the object is a country that recognizes the subject as its citizen |
| P31 | instance of | that class of which this subject is a particular example and member |
| P35 | head of state | official with the highest formal authority in a country/state |
| P39 | position held | subject currently or formerly holds the object position or public office |
| P40 | child | subject has object as child. Do not use for stepchildren |
| P47 | shares border with | countries or administrative subdivisions, of equal level, that this item borders, either by land or water. A single common point is enough. |
| P54 | member of sports team | sports teams or clubs that the subject represents or represented |
| P69 | educated at | educational institution attended by subject |
| P81 | connecting line | railway line(s) subject is directly connected to |
| P97 | noble title | titles held by the person |
| P102 | member of political party | the political party of which a person is or has been a member or otherwise affiliated |
| P106 | occupation | occupation of a person; see also "field of work" (Property:P101), "position held" (Property:P39) |
| P108 | employer | person or organization for which the subject works or worked |
| P115 | home venue | home stadium or venue of a sports team or applicable performing arts organization |
| P118 | league | league in which team or player plays or has played in |
| P127 | owned by | owner of the subject |
| P131 | located in the administrative territorial entity | the item is located on the territory of the following administrative entity. |
| P137 | operator | person, profession, or organization that operates the equipment, facility, or service |
| P156 | followed by | immediately following item in a series of which the subject is a part |
| P159 | headquarters location | city, where an organization's headquarters is or has been situated. Use P276 qualifier for specific building |
| P161 | cast member | actor in the subject production |
| P166 | award received | award or recognition received by a person, organisation or creative work |
| P175 | performer | actor, musician, band or other performer associated with this role or musical work |
| P176 | manufacturer | manufacturer or producer of this product |
| P179 | part of the series | series which contains the subject |
| P194 | legislative body | legislative body governing this entity; political institution with elected representatives, such as a parliament/legislature or council |
| P197 | adjacent station | the stations next to this station, sharing the same line(s) |
| P241 | military branch | branch to which this military unit, award, office, or person belongs, e.g. Royal Navy |
| P276 | location | location of the object, structure or event. In the case of an administrative entity as containing item use P131. |
| P279 | subclass of | next higher class or type; all instances of these items are instances of those items; this item is a class (subset) of that item. |
| P361 | part of | object of which the subject is a part |
| P414 | stock exchange | exchange on which this company is traded |
| P449 | original broadcaster | network(s) or service(s) that originally broadcasted a radio or television program |
| P463 | member of | organization, club or musical group to which the subject belongs. Do not use for membership in ethnic or social groups |
| P466 | occupant | person or organization occupying property |
| P488 | chairperson | presiding member of an organization, group or body |
| P551 | residence | the place where the person is or has been, resident |
| P641 | sport | sport that the subject participates or participated in or is associated with |
| P669 | located on street | street, road, or square, where the item is located. |
| P710 | participant | person, group of people or organization (object) that actively takes/took part in an event or process (subject). |
| P725 | voice actor | performer of a spoken role in a creative work such as animation, video game, radio drama, or dubbing over |
| P749 | parent organization | parent organization of an organization, opposite of subsidiaries (P355) |
| P793 | significant event | significant or notable events associated with the subject |
| P800 | notable work | notable scientific, artistic or literary work, or other work of significance among subject's works |
| P1037 | director / manager | person who manages any kind of group |
| P1327 | partner in business or sport | professional collaborator |
| P1346 | winner | winner of a competition or similar event, not to be used for awards |
| P1365 | replaces | person, state or item replaced. Use "structure replaces" (P1398) for structures. |
| P1376 | capital of | country, state, department, canton or other administrative division of which the municipality is the governmental seat |
| P1411 | nominated for | award nomination received by a person, organisation or creative work (inspired from "award received" (Property:P166)) |
| P1441 | present in work | this (fictional or fictionalized) entity or person appears in that work as part of the narration |
| P1535 | used by | item or concept that makes use of the subject (use sub-properties when appropriate) |
| P1923 | participating team | like 'Participant' (P710) but for teams. For an event like a cycle race or a football match you can use this property to list the teams |
| P3450 | sports season of league or competition | property that shows the competition of which the item is a season. Use P5138 for "season of club or team". |
| P3602 | candidacy in election | election where the subject is a candidate |
| P3701 | incarnation of | incarnation of another religious or supernatural being |
| P5800 | narrative role | narrative role of this character (should be used as a qualifier with P674 or restricted to a certain work using P642) |
| P6087 | coach of sports team | sports club or team for which this person is or was on-field manager or coach |

Table 9: List of relation labels in HyperRED.
| Wiki ID | Label | Description |
| --- | --- | --- |
| P17 | country | sovereign state of this item (not to be used for human beings) |
| P25 | mother | female parent of the subject. For stepmother, use "stepparent" (P3448) |
| P31 | instance of | that class of which this subject is a particular example and member |
| P39 | position held | subject currently or formerly holds the object position or public office |
| P81 | connecting line | railway line(s) subject is directly connected to |
| P102 | member of political party | the political party of which a person is or has been a member or otherwise affiliated |
| P131 | located in the administrative territorial entity | the item is located on the territory of the following administrative entity. |
| P155 | follows | immediately prior item in a series of which the subject is a part, preferably use as qualifier of P179 |
| P175 | performer | actor, musician, band or other performer associated with this role or musical work |
| P197 | adjacent station | the stations next to this station, sharing the same line(s) |
| P249 | ticker symbol | identifier for a publicly traded share of a particular stock on a particular stock market or that of a cryptocurrency |
| P276 | location | location of the object, structure or event. In the case of an administrative entity as containing item use P131. |
| P413 | position played on team / speciality | position or specialism of a player on a team |
| P453 | character role | specific role played or filled by subject – use only as qualifier of "cast member" (P161), "voice actor" (P725) |
| P512 | academic degree | academic degree that the person holds |
| P518 | applies to part | part, aspect, or form of the item to which the claim applies |
| P527 | has part | part of this subject; inverse property of "part of" (P361). See also "has parts of the class" (P2670). |
| P577 | publication date | date or point in time when a work was first published or released |
| P580 | start time | time an event starts, an item begins to exist, or a statement becomes valid |
| P582 | end time | time an item ceases to exist or a statement stops being valid |
| P585 | point in time | time and date something took place, existed or a statement was true |
| P642 | of | qualifier stating that a statement applies within the scope of a particular item |
| P670 | street number | number in the street address. To be used as a qualifier of Property:P669 "located on street" |
| P708 | diocese | administrative division of the church to which the element belongs |
| P768 | electoral district | electoral district this person is representing, or of the office that is being contested. |
| P805 | statement is subject of | (qualifying) item that describes the relation identified in this statement |
| P812 | academic major | major someone studied at college/university |
| P1114 | quantity | number of instances of this subject |
| P1129 | national team appearances | total number of games officially played by a sportsman for national team |
| P1310 | statement disputed by | entity that disputes a given statement |
| P1346 | winner | winner of a competition or similar event, not to be used for awards |
| P1350 | number of matches played/races/starts | matches or games a player or a team played during an event. |
| P1352 | ranking | subject's numbered position within a competition or group of performers |
| P1365 | replaces | person, state or item replaced. Use "structure replaces" (P1398) for structures. |
| P1416 | affiliation | organization that a person or organization is affiliated with (not necessarily member of or employed by) |
| P1545 | series ordinal | position of an item in its parent series (most frequently a 1-based index), generally to be used as a qualifier |
| P1686 | for work | qualifier of award received (P166) to specify the work that an award was given to the creator for |
| P1706 | together with | qualifier to specify the item that this property is shared with |
| P2453 | nominee | qualifier used with «nominated for» to specify which person or organization was nominated |
| P2868 | subject has role | role/generic identity of the item ("subject"), also in the context of a statement. |
| P3831 | object has role | (qualifier) role or generic identity of the value of a statement ("object") in the context of that statement |
| P3983 | sports league level | the level of the sport league in the sport league system |
| P5051 | towards | qualifier for "adjacent station" (P197) to indicate the terminal station(s) of a transportation line or service in that direction |
Table 10: List of qualifier labels in HyperRED.
# ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation

Fan Yin

University of California, Los Angeles fanyin20@cs.ucla.edu

Yao Li

University of North Carolina, Chapel Hill yaoli@email.unc.edu

Cho-Jui Hsieh

University of California, Los Angeles
chohsieh@cs.ucla.edu

Kai-Wei Chang

University of California, Los Angeles kwchang@cs.ucla.edu

# Abstract

Adversarial Examples Detection (AED) is a crucial defense technique against adversarial attacks and has drawn increasing attention from the Natural Language Processing (NLP) community. Despite the surge of new AED methods, our studies show that existing methods heavily rely on a shortcut to achieve good performance. In other words, current search-based adversarial attacks in NLP stop once model predictions change, and thus most adversarial examples generated by those attacks are located near model decision boundaries. To surpass this shortcut and fairly evaluate AED methods, we propose to test AED methods with Far Boundary (FB) adversarial examples. Existing methods show worse-than-random-guess performance under this scenario. To overcome this limitation, we propose a new technique, ADDMU, adversary detection with data and model uncertainty, which combines two types of uncertainty estimation for both regular and FB adversarial example detection. Our new method outperforms previous methods by 3.6 and 6.0 AUC points under each scenario. Finally, our analysis shows that the two types of uncertainty provided by ADDMU can be leveraged to characterize adversarial examples and identify the ones that contribute most to the model's robustness in adversarial training.

# 1 Introduction

Deep neural networks (DNNs) have achieved remarkable performance in a wide variety of NLP tasks. However, it has been shown that DNNs can be vulnerable to adversarial examples (Jia and Liang, 2017; Alzantot et al., 2018; Jin et al., 2020), i.e., perturbed examples that flip model predictions but remain imperceptible to humans, and thus impose serious security concerns on NLP models.

To improve the robustness of NLP models, different kinds of techniques to defend against adversarial examples have been proposed (Li et al., 2021b).
In this paper, we study AED, which aims to add a detection module to identify and reject malicious inputs based on certain characteristics. Different from adversarial training methods (Madry et al., 2018a; Jia et al., 2019), which require re-training of the model with additional data or regularization, AED operates at test time and can be directly integrated with any existing model.

Despite being well explored in the vision domain (Feinman et al., 2017; Raghuram et al., 2021), AED has only recently started to receive attention in NLP. Many works conduct detection based on certain statistics (Zhou et al., 2019; Mozes et al., 2021; Yoo et al., 2022; Xie et al., 2022). Specifically, Yoo et al. (2022) propose a benchmark for AED methods and a competitive baseline based on robust density estimation. However, by studying examples in the benchmark, we find that the success of some AED methods relies heavily on a shortcut left by adversarial attacks: most adversarial examples are located near model decision boundaries, i.e., they have a small probability discrepancy between the predicted class and the second-largest class. This is because, when creating adversarial data, the search process stops once model predictions change. We illustrate this finding in Section 2.2.

To evaluate detection methods accurately, we propose to test AED methods on both regular adversarial examples and Far-Boundary $(\mathbf{FB})^{1}$ adversarial examples, which are created by continuing to search for better adversarial examples until a threshold on the probability discrepancy is met. Results show that existing AED methods perform worse than random guess on FB adversarial examples. Yoo et al. (2022) recognize this limitation, but we find that the phenomenon is more severe than reported in their work. Thus, an AED method that works for FB attacks is needed.

We propose ADDMU, an uncertainty-estimation-based AED method.
The key intuition is that adversarial examples lie off the manifold of training data, and models are typically uncertain about their predictions on them. Thus, although the prediction probability is no longer a good uncertainty measure when adversarial examples are far from the model decision boundary, other statistical clues give away the 'uncertainty' in predictions and can identify adversarial data. In this paper, we introduce two of them: data uncertainty and model uncertainty. Data uncertainty is defined as the uncertainty of model predictions over neighbors of the input. Model uncertainty is defined as the prediction variance on the original input when applying Monte Carlo Dropout (MCD) (Gal and Ghahramani, 2016) to the target model at inference time. Previous work has shown that models trained with dropout regularization (Srivastava et al., 2014) approximate the inference in Bayesian neural networks with MCD, where model uncertainty is easy to obtain (Gal and Ghahramani, 2016; Smith and Gal, 2018). Given the statistics of the two uncertainties, we apply p-value normalization (Raghuram et al., 2021) and combine them with Fisher's method (Fisher, 1992) to produce a stronger test statistic for AED. To the best of our knowledge, we are the first to estimate the uncertainty of Transformer-based models (Shelmanov et al., 2021) for AED.

The advantages of our proposed AED method include: 1) it operates only on the output level of the model; 2) it requires little to no modification to adapt to different architectures; 3) it provides a unified way to combine different types of uncertainty. Experimental results on four datasets, four attacks, and two models demonstrate that our method outperforms existing methods by 3.6 and 6.0 AUC points on regular and FB cases, respectively.
We also show that the two uncertainty statistics can be used to characterize adversarial data and to select useful data for another defense technique, adversarial data augmentation (ADA).

The code for this paper can be found at https://github.com/uclanlp/AdvExDetection-ADDMU

# 2 A Diagnostic Study on AED Methods

In this section, we first describe the formulation of adversarial examples and AED. Then, we show that current AED methods mainly do well at detecting adversarial examples near the decision boundary, but are confused by FB adversarial examples.

# 2.1 Formulation

Adversarial Examples. Given an NLP model $f: \mathcal{X} \to \mathcal{Y}$, a textual input $x \in \mathcal{X}$, a predicted class from the candidate classes $y \in \mathcal{Y}$, and a set of boolean indicator functions of constraints, $\mathcal{C}_i: \mathcal{X} \times \mathcal{X} \to \{0,1\}$, $i = 1,2,\dots,n$, an (untargeted) adversarial example $x^{*} \in \mathcal{X}$ satisfies:

$$
f \left(x ^ {*}\right) \neq f (x), \quad \mathcal {C} _ {i} \left(x, x ^ {*}\right) = 1, \quad i = 1, 2, \dots , n.
$$

Constraints are typically grammatical or semantic similarities between original and adversarial data. For example, Jin et al. (2020) conduct part-of-speech checks and use the Universal Sentence Encoder (Cer et al., 2018) to ensure semantic similarity between two sentences.

Adversarial Examples Detection (AED) The task of AED is to distinguish adversarial examples from natural ones, based on certain characteristics of adversarial data. We assume access to 1) the victim model $f$, trained and tested on clean datasets $D_{train}$ and $D_{test}$; 2) an evaluation set $D_{eval}$; 3) an auxiliary dataset $D_{aux}$ containing only clean data. $D_{eval}$ contains an equal number of adversarial examples $D_{eval - adv}$ and natural examples $D_{eval - nat}$. $D_{eval - nat}$ are randomly sampled from $D_{test}$.
$D_{eval - adv}$ is generated by attacking a set of samples from $D_{test}$ that is disjoint from $D_{eval - nat}$. See Scenario 1 in Yoo et al. (2022) for details. We use a subset of $D_{train}$ as $D_{aux}$. We adopt an unsupervised setting, i.e., the AED method is not trained on any dataset that contains adversarial examples.

# 2.2 Diagnose AED Methods

We define examples near model decision boundaries to be those whose output probabilities for the predicted class and the second-largest class are close. Regular iterative adversarial attacks stop once the prediction is changed. Therefore, we suspect that regular attacks mostly generate adversarial examples near the boundaries, and existing AED methods could rely on this property to detect adversarial examples.

Figure 1 verifies this for the state-of-the-art unsupervised AED method in NLP (Yoo et al., 2022), denoted as RDE. Similar trends are observed for another baseline. The X-axis shows two attack methods: TextFooler (Jin et al., 2020) and Pruthi (Pruthi et al., 2019). The Y-axis represents the probability

![](images/023aed0c33e1b435ba7a05462c6731ebf619f3936957d76df1a092af7f406b6e.jpg)
Figure 1: The probability difference between the predicted class and the second largest class on natural examples, and on adversarial examples that the detector failed on, succeeded on, and overall. The X-axis is the attack. The Y-axis is the difference. Correctly detected adversarial examples have a relatively small probability difference.

![](images/ddd82e302d665e6339741af06be5c91cebf2f7a326079b6184dab6cc9e1cff83.jpg)
| Data-attack | RDE (Regular) | RDE (FB) | DIST (Regular) | DIST (FB) |
| --- | --- | --- | --- | --- |
| SST2-TF | 72.8/86.5 | 45.0/81.5 | 73.4/87.9 | 26.3/81.6 |
| SST2-Pruthi | 55.1/80.6 | 30.8/72.6 | 61.4/85.3 | 26.5/74.6 |
| Yelp-TF | 79.2/89.6 | 44.6/82.7 | 80.3/90.6 | 64.3/86.2 |
| Yelp-Pruthi | 64.8/88.0 | 47.9/85.2 | 72.2/89.2 | 55.2/84.9 |
difference between the predicted class and the second-largest class. Average probability differences are shown for natural examples (Natural) and for three types of adversarial examples: those RDE fails to identify (Failed), those successfully detected (Detected), and all of them (Overall). There is a clear trend that successfully detected adversarial examples are those with small probability differences, while the ones with high probability differences are often misclassified as natural examples. This finding shows that these AED methods identify examples near the decision boundaries, rather than adversarial examples as such.

To better evaluate AED methods, we propose to avoid the above shortcut by testing detection methods with FB adversarial examples, which are generated by continuing to search for adversarial examples until a prediction probability threshold is reached. We simply add another goal function to the adversarial example definition while keeping the other conditions unchanged:

$$
\begin{array}{l} f \left(x ^ {*}\right) \neq f (x), \quad p \left(y = f \left(x ^ {*}\right) \mid x ^ {*}\right) \geq \epsilon, \\ \mathcal {C} _ {i} \left(x, x ^ {*}\right) = 1, \quad i = 1, 2, \dots , n. \end{array}
$$

$p\left(y = f\left(x^{*}\right)\mid x^{*}\right)$ denotes the predicted probability for the adversarial example, and $\epsilon$ is a manually defined threshold. We illustrate the choice of $\epsilon$ in Section 4.1.

Table 1: F1/AUC scores of two SOTA detection methods on Regular and FB adversarial examples. RDE and DIST perform worse than random guess (F1 = 50.0) on FB adversarial examples.

As Table 1 shows, the existing competitive methods (RDE and DIST) obtain lower-than-random-guess F1 scores when evaluated with FB adversarial examples.

# 2.3 Quality Check for FB Attacks

We show that, empirically, the quality of adversarial examples does not significantly degrade even when searching for more steps and for stronger FB adversarial examples. We follow Morris et al. (2020a) and evaluate the quality of FB adversarial examples in terms of grammatical and semantic changes, comparing them with regular adversarial examples. We use a triple $(x, x_{adv}, x_{FB - adv})$ to denote the original example, its corresponding regular adversarial example, and its FB adversarial example. For grammatical changes, we conduct an automatic evaluation with LanguageTool (Naber et al., 2003) to count grammatical errors and report the relative increase in errors of perturbed examples w.r.t. the original examples. For semantic changes, we run a human evaluation on Amazon MTurk$^{2}$. We ask the workers to rate to what extent the changes to $x$ preserve the meaning of the sentence, on a scale from 1 ('Strongly disagree') to 5 ('Strongly agree'). Results are summarized in Table 2. The values are averaged over three adversarial attacks, 50 examples each. We find that the FB attacks have minimal impact on the quality of the adversarial examples.

| Data | Grammar (Regular) | Grammar (FB) | Semantics (Regular) | Semantics (FB) |
| --- | --- | --- | --- | --- |
| SST-2 | 1.117 | 1.129 | 3.960 | 3.900 |
| Yelp | 1.209 | 1.233 | 4.113 | 4.082 |

Table 2: Quality checks for FB adversarial examples. The results on each dataset are averaged over examples from three attacks (TextFooler, BAE, Pruthi) and their FB versions. The Grammar columns report the relative increase in errors of perturbed examples w.r.t. the original examples. The Semantics columns report the average human-rated degree to which the adversarial examples preserve the original meaning. The quality of adversarial examples does not degrade much with the FB versions of the attacks.
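The extra FB goal function (a prediction flip plus a probability threshold $\epsilon$) amounts to a one-line check on the candidate's output distribution. A minimal sketch; the function name and the default $\epsilon = 0.9$ are illustrative, not values from the paper:

```python
def is_fb_adversarial(model_probs, orig_pred, epsilon=0.9):
    # Far-boundary goal: the prediction must flip AND the new predicted
    # class must carry probability at least epsilon.
    new_pred = max(range(len(model_probs)), key=model_probs.__getitem__)
    return new_pred != orig_pred and model_probs[new_pred] >= epsilon

# A near-boundary flip is rejected; a far-boundary flip passes.
print(is_fb_adversarial([0.45, 0.55], orig_pred=0))  # False
print(is_fb_adversarial([0.05, 0.95], orig_pred=0))  # True
```

A search-based attack would keep perturbing until this check passes, rather than stopping at the first prediction flip.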
We show some examples in Table 7, which qualitatively demonstrate that it is hard for humans to identify FB adversarial examples.

# 3 Adversary Detection with Data and Model Uncertainty (ADDMU)

Given the poor performance of previous methods on FB attacks, we aim to build a detector that can handle not only regular but also FB adversarial examples. We propose ADDMU, an uncertainty-estimation-based AED method combining two types of uncertainty: model uncertainty and data uncertainty. We expect adversarial examples to have large values for both. The motivation for using uncertainty is that models can still be uncertain about their predictions even when they assign a high probability to the predicted class of an example. We describe the definitions and estimations of the two uncertainties, and how to combine them.

# 3.1 Model Uncertainty Estimation

Model uncertainty represents the uncertainty when predicting a single data point with randomized models. Gal and Ghahramani (2016) show that model uncertainty can be extracted from DNNs trained with dropout and run at inference time with MCD, without any modification of the network. This is because the training objective with dropout minimizes the Kullback-Leibler divergence between the posterior distribution of a Bayesian network and an approximating distribution. We follow this approach and define the model uncertainty as the softmax variance when applying MCD at test time.

Specifically, given a trained model $f$ , we perform $N_{m}$ stochastic forward passes for each data point $x$ . The dropout masks of hidden representations for each forward pass are sampled i.i.d. from a Bernoulli distribution, i.e., $z_{lk} \sim \text{Bernoulli}(p_m)$ , where $p_m$ is a fixed dropout rate for all layers and $z_{lk}$ is the mask for neuron $k$ on layer $l$ . Then, we can form a Monte Carlo estimate of the softmax variance among the $N_{m}$ stochastic softmax outputs.
Denoting the probability of predicting the input as the $i$ -th class in the $j$ -th forward pass by $p_{ij}$ , and the mean probability for the $i$ -th class over $N_{m}$ passes by $\bar{p}_i = \frac{1}{N_m} \sum_{j=1}^{N_m} p_{ij}$ , the model uncertainty (MU) is computed as

$$
M U (x) = \frac {1}{| \mathcal {Y} |} \sum_ {i = 1} ^ {| \mathcal {Y} |} \frac {1}{N _ {m}} \sum_ {j = 1} ^ {N _ {m}} (p _ {i j} - \bar {p} _ {i}) ^ {2}.
$$

# 3.2 Data Uncertainty Estimation

Data uncertainty quantifies the predictive probability distribution of a fixed model over the neighborhood of an input point.

Specifically, similar to the model uncertainty estimation, we perform $N_{d}$ stochastic forward passes. But instead of randomly zeroing out neurons in the model, we fix the trained model and construct a stochastic input for each forward pass by masking out input tokens, i.e., replacing each token in the original input by a special token with probability $p_d$ . The data uncertainty is estimated by the mean of $(1 -$ maximum softmax probability $)$ over the $N_{d}$ forward passes. Denoting the $N_{d}$ stochastic inputs by $x_{1},x_{2},\dots ,x_{N_{d}}$ , the original prediction by $y$ , and the predictive probability of the originally predicted class by $p_y(\cdot)$ , the Monte Carlo estimate of the data uncertainty (DU) is:

$$
D U (x) = \frac {1}{N _ {d}} \sum_ {i = 1} ^ {N _ {d}} \left(1 - p _ {y} (x _ {i})\right).
$$
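The two Monte Carlo estimators above can be sketched as follows. This is a minimal NumPy illustration: the embedding-average classifier, its weights, and the mask token id are assumptions standing in for the victim model and its tokenizer.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = np.random.default_rng(1).normal(size=(10, 8))  # assumed vocab x hidden embeddings
W = np.random.default_rng(2).normal(size=(3, 8))     # assumed classes x hidden weights
MASK_ID = 0                                          # assumed special mask token id

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def toy_logits(ids, dropout=0.0):
    """Hypothetical classifier: average token embeddings, optional dropout
    on the hidden vector (emulating MC dropout), then a linear layer."""
    h = EMB[ids].mean(axis=0)
    if dropout > 0:
        h = h * (rng.random(h.shape) >= dropout) / (1 - dropout)
    return W @ h

def model_uncertainty(ids, n_m=10, p_m=0.2):
    # MU: per-class softmax variance over n_m MC-dropout passes, averaged over classes.
    probs = np.stack([softmax(toy_logits(ids, dropout=p_m)) for _ in range(n_m)])
    return float(probs.var(axis=0).mean())

def data_uncertainty(ids, n_d=25, p_d=0.1):
    # DU: mean (1 - probability of the originally predicted class) over
    # n_d copies of the input with tokens randomly replaced by MASK_ID.
    y = int(np.argmax(toy_logits(ids)))
    masked = [np.where(rng.random(len(ids)) < p_d, MASK_ID, ids) for _ in range(n_d)]
    return float(np.mean([1.0 - softmax(toy_logits(m))[y] for m in masked]))

x = np.array([1, 4, 7, 2])
print(model_uncertainty(x), data_uncertainty(x))
```

With a real Transformer, `toy_logits` would be replaced by a forward pass with dropout left active (for MU) or by feeding mask-token-corrupted inputs to the fixed model (for DU).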
By definition, a p-value is the probability of a test statistic being at least as extreme as the target value; the transformation therefore converts any test statistic into a uniformly distributed probability under the null hypothesis. We construct empirical distributions for MU and DU by calculating the corresponding uncertainties for each example in the auxiliary dataset $\mathcal{D}_{aux}$ , denoted as $T_{mu}$ and $T_{du}$ . Under the null hypothesis $H_0$ that the data being evaluated comes from the clean distribution, we calculate the p-values based on model uncertainty $(q_m)$ and data uncertainty $(q_d)$ by:

$$
q _ {m} (x) = \mathbb {P} \left(T _ {m u} \geq M U (x) \mid H _ {0}\right),
$$

$$
q _ {d} \left(x\right) = \mathbb {P} \left(T _ {d u} \geq D U \left(x\right) \mid H _ {0}\right).
$$

The smaller the values $q_{m}$ and $q_{d}$ , the higher the probability that the example is adversarial.

Given $q_{m}$ and $q_{d}$ , we combine them into a single p-value using Fisher's method for a combined probability test (Fisher, 1992). Fisher's method states that under the null hypothesis, $-2$ times the sum of the logs of the two p-values follows a $\chi^2$ distribution with 4 degrees of freedom. We use $q_{agg}$ to denote the aggregated p-value; adversarial examples should have a smaller $q_{agg}$ , where $\log q_{agg} = \log q_{m} + \log q_{d}$ .

# 4 Experiments

We first describe the experimental setup (Section 4.1), then present our results on both regular and FB AED (Section 4.2). Results show that our ADDMU outperforms existing methods by a large margin under both scenarios.

# 4.1 Experimental Setup

Datasets and victim models. We conduct experiments on classification tasks in different domains, including sentiment analysis (SST-2 (Socher et al., 2013), Yelp (Zhang et al., 2015)), topic classification (AGNews (Zhang et al., 2015)), and natural language inference (SNLI (Bowman et al., 2015)).
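Looking back at Section 3.3, the p-value normalization and Fisher aggregation can be sketched as below. The uniform null distributions are assumptions for illustration only, and the sketch returns the $\chi^2$ statistic, where a larger value corresponds to a smaller $q_{agg}$.

```python
import math
import numpy as np

def empirical_p_value(null_stats, value):
    """p-value of `value` against an empirical null distribution of
    uncertainty statistics computed on a clean auxiliary set."""
    null = np.asarray(null_stats)
    return (np.sum(null >= value) + 1) / (len(null) + 1)  # add-one smoothing

def fisher_statistic(q_m, q_d):
    """Fisher's method: under H0, -2 * (ln q_m + ln q_d) follows a chi^2
    distribution with 4 degrees of freedom. A larger statistic means a
    smaller aggregated p-value, i.e. more likely adversarial."""
    return -2.0 * (math.log(q_m) + math.log(q_d))

# Assumed uniform null distributions T_mu, T_du, for illustration.
rng = np.random.default_rng(0)
T_mu, T_du = rng.random(1000), rng.random(1000)

q_m = empirical_p_value(T_mu, 0.95)  # unusually high model uncertainty
q_d = empirical_p_value(T_du, 0.97)  # unusually high data uncertainty
print(fisher_statistic(q_m, q_d))    # large statistic -> flag as adversarial
```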
We generate both regular and FB adversarial examples on the test data of each dataset with two word-level attacks, TextFooler (TF) (Jin et al., 2020) and BAE (Garg and Ramakrishnan, 2020), and two character-level attacks, Pruthi (Pruthi et al., 2019) and TextBugger (TB) (Li et al., 2019). We only consider examples that are predicted correctly before the attacks. The number of evaluated examples varies from 400 to 4,000 across datasets; see Appendix B. For FB adversarial examples, we choose $\epsilon$ so that adversarial examples have approximately the same average prediction probability as natural data: specifically, $\epsilon = 0.9$ for SST-2, Yelp, and AGNews, and $\epsilon = 0.7$ for SNLI. We mainly experiment with two Transformer-based victim models, BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), as they are widely adopted in current NLP pipelines and show superior performance to other architectures. More details are presented in Appendix B. In Appendix H, we also present some simple experiments with BiLSTM.

Baselines. We compare ADDMU with several unsupervised AED methods. 1) MSP: Hendrycks and Gimpel (2017) use the Maximum Softmax Probability (MSP) for detection; 2) PPL: GPT-2 large (Radford et al., 2019) is used as a language model to measure the perplexity of the input; 3) FGWS: Mozes et al. (2021) measure the difference in prediction probability after replacing infrequent words in the input with frequent words, and find that adversarial examples exhibit a larger prediction change; 4) RDE: Yoo et al. (2022) fit class-conditional density estimates with Kernel PCA (Schölkopf et al., 1998) and the Minimum Covariance Determinant (Rousseeuw, 1984) in the feature space and use the density scores; 5) DIST: we propose a distance-based baseline that uses the difference between class-conditional, averaged K-nearest distances. See Appendix C for details.

Unsupervised AED methods assign a score to each evaluated example.
Then, a threshold is selected based on the maximum False Positive Rate (FPR) allowed, i.e., the rate of natural data mis-classified as adversarial.

Implementation Details. For FGWS and RDE, we follow the hyper-parameters in their papers to reproduce the numbers. For DIST and ADDMU, we attack the validation set and use those examples to tune the hyper-parameters; see Appendix D for details. Specifically, for DIST, we use 600 neighbors. For ADDMU, we find that $N_{m} = 10$ and $p_{m} = 0.2$ for MU work well across all datasets. For DU, we find it beneficial to ensemble different mask rates for the text classification tasks: we set $N_{d} = 100$ in total, with 25 passes for each $p_{d} \in \{0.1, 0.2, 0.3, 0.4\}$ , and use $N_{d} = 25$ , $p_{d} = 0.1$ for SNLI.

Metrics. In the main experiments, we select the threshold at maximum FPR=0.1. A lower FPR represents a more practical setting where only a small proportion of natural samples are mis-classified as adversarial. Following the setup in Xu et al. (2018) and Yoo et al. (2022), we report the True Positive Rate (TPR), i.e., the fraction of real adversarial examples that are correctly detected, the F1 score at FPR=0.1, and the Area Under the ROC curve (AUC), which measures the area under the TPR-FPR curve. For all metrics, higher is better.

# 4.2 Results

Performances of AED methods on BERT are presented in Table 3. We average the results over three runs with different random seeds. See Appendix F for the results on RoBERTa.

Detector performance. Our proposed ADDMU achieves the best performance on both regular and FB adversarial examples under all three metrics (TPR, F1, AUC) on the four datasets, which demonstrates its effectiveness. Further, ADDMU preserves more than $90\%$ of its performance, or even achieves better results (e.g., on SST-2-Pruthi and Yelp-BAE), under FB adversarial attacks, which shows that ADDMU is not affected by FB attacks.
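The threshold selection at a fixed maximum FPR, described in the Metrics paragraph above, can be sketched as follows; the Gaussian score distributions are assumptions for illustration only.

```python
import numpy as np

def threshold_at_fpr(natural_scores, max_fpr=0.1):
    """Pick the detection threshold so that at most `max_fpr` of the
    clean (natural) examples are flagged as adversarial."""
    scores = np.sort(np.asarray(natural_scores))
    return scores[int(np.ceil((1 - max_fpr) * len(scores))) - 1]

def true_positive_rate(adv_scores, threshold):
    # Fraction of adversarial examples whose score exceeds the threshold.
    return float(np.mean(np.asarray(adv_scores) > threshold))

# Assumed score distributions: adversarial scores shifted upward.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 1000)
adv = rng.normal(2.0, 1.0, 1000)

t = threshold_at_fpr(clean, max_fpr=0.1)
print(true_positive_rate(adv, t))
```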
+ +The performances of MSP, DIST, and RDE are severely degraded under FB attacks. This demonstrates that those methods can be fooled and circumvented by carefully designed attacks. Under regular attacks, the performances of RDE and DIST are + +
| Attack | Method | SST-2 TPR | SST-2 F1 | SST-2 AUC | AGNews TPR | AGNews F1 | AGNews AUC | Yelp TPR | Yelp F1 | Yelp AUC | SNLI TPR | SNLI F1 | SNLI AUC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TF | PPL | 31.2 | 44.2 | 72.4 | 76.1 | 81.8 | 91.1 | 45.7 | 58.8 | 79.3 | 40.2 | 53.6 | 78.0 |
| | FGWS | 62.9 | 72.8 | 76.5 | 83.0 | 86.3 | 85.5 | 67.1 | 72.7 | 80.6 | 48.5 | 55.4 | 72.2 |
| | MSP | 64.0 | 73.6 | 88.0 | 95.2 | 92.8 | 97.5 | 73.9 | 80.4 | 90.6 | 56.7 | 68.0 | 83.6 |
| | RDE | 62.9 | 72.8 | 86.5 | 96.0 | 93.2 | 97.0 | 72.0 | 79.2 | 89.6 | 46.3 | 59.3 | 81.0 |
| | DIST | 64.0 | 73.4 | 87.9 | 94.5 | 92.4 | 95.9 | 73.8 | 80.3 | 90.6 | 37.2 | 50.4 | 74.5 |
| | ADDMU | 67.1 | 75.8 | 88.8 | 99.2 | 94.9 | 98.6 | 78.7 | 83.5 | 91.6 | 68.9 | 77.0 | 89.7 |
| TF-FB | PPL | 41.9 | 55.2 | 80.6 | 83.3 | 86.3 | 93.9 | 49.9 | 62.4 | 81.6 | 44.1 | 57.2 | 79.2 |
| | FGWS | 61.8 | 72.0 | 77.9 | 84.8 | 87.1 | 88.1 | 72.2 | 78.0 | 89.4 | 52.1 | 59.6 | 78.4 |
| | MSP | 31.2 | 44.2 | 81.9 | 82.0 | 85.4 | 91.5 | 66.0 | 75.0 | 87.1 | 26.8 | 39.2 | 75.1 |
| | RDE | 31.9 | 45.0 | 81.5 | 71.9 | 79.1 | 92.5 | 31.5 | 44.6 | 82.7 | 43.1 | 56.4 | 79.6 |
| | DIST | 20.7 | 26.3 | 81.6 | 66.6 | 75.4 | 91.8 | 54.8 | 64.3 | 86.2 | 27.2 | 39.6 | 69.9 |
| | ADDMU | 62.0 | 72.2 | 88.0 | 97.5 | 94.0 | 97.8 | 72.8 | 79.7 | 89.7 | 53.6 | 65.8 | 87.5 |
| BAE | PPL | 19.7 | 30.4 | 66.2 | 30.9 | 44.0 | 71.8 | 23.6 | 35.3 | 70.1 | 24.8 | 36.8 | 68.1 |
| | FGWS | 37.6 | 51.0 | 64.2 | 64.7 | 74.2 | 72.5 | 54.9 | 66.7 | 68.0 | 31.2 | 44.0 | 67.9 |
| | MSP | 45.1 | 58.3 | 79.0 | 96.0 | 93.4 | 96.0 | 68.3 | 76.7 | 89.5 | 41.4 | 54.7 | 71.4 |
| | RDE | 44.2 | 57.3 | 79.3 | 96.4 | 93.7 | 96.3 | 65.2 | 74.5 | 89.1 | 41.7 | 55.0 | 76.8 |
| | DIST | 44.9 | 57.3 | 78.9 | 94.2 | 91.9 | 96.2 | 68.0 | 76.2 | 89.4 | 36.8 | 49.7 | 67.9 |
| | ADDMU | 45.9 | 58.9 | 82.3 | 96.4 | 93.5 | 97.3 | 72.5 | 79.5 | 90.1 | 48.2 | 61.0 | 81.0 |
| BAE-FB | PPL | 26.0 | 38.2 | 70.5 | 45.5 | 58.7 | 79.6 | 28.5 | 41.3 | 73.0 | 24.9 | 37.0 | 67.9 |
| | FGWS | 20.4 | 31.4 | 57.1 | 72.6 | 79.6 | 78.2 | 51.9 | 64.3 | 65.9 | 32.9 | 47.5 | 63.4 |
| | MSP | 12.8 | 21.1 | 70.4 | 79.2 | 83.8 | 91.2 | 69.1 | 77.2 | 88.3 | 18.3 | 28.6 | 62.6 |
| | RDE | 19.5 | 30.2 | 72.5 | 68.8 | 77.0 | 91.2 | 66.4 | 75.4 | 88.1 | 34.6 | 47.9 | 74.0 |
| | DIST | 17.7 | 26.1 | 70.1 | 64.9 | 68.1 | 91.4 | 69.7 | 77.3 | 88.4 | 29.5 | 42.3 | 62.9 |
| | ADDMU | 51.4 | 64.1 | 84.6 | 83.7 | 85.9 | 94.1 | 76.3 | 81.9 | 90.6 | 34.9 | 48.4 | 76.0 |
| Pruthi | PPL | 29.7 | 42.9 | 71.9 | 31.0 | 44.0 | 70.7 | 35.3 | 48.7 | 72.9 | 54.9 | 66.6 | 85.5 |
| | MSP | 53.2 | 65.2 | 82.6 | 75.7 | 81.9 | 91.5 | 65.4 | 74.7 | 88.7 | 22.5 | 33.9 | 69.2 |
| | RDE | 41.4 | 55.1 | 80.6 | 77.4 | 82.8 | 92.4 | 52.6 | 64.8 | 88.0 | 34.6 | 47.8 | 76.5 |
| | DIST | 55.0 | 61.4 | 82.9 | 77.8 | 82.0 | 92.1 | 66.7 | 72.2 | 88.2 | 23.6 | 35.2 | 65.1 |
| | ADDMU | 55.9 | 67.4 | 85.4 | 96.7 | 93.9 | 97.4 | 78.8 | 83.7 | 91.8 | 55.7 | 67.1 | 86.0 |
| Pruthi-FB | PPL | 28.6 | 41.6 | 72.3 | 27.8 | 40.4 | 71.6 | 37.3 | 50.8 | 73.3 | 37.2 | 50.6 | 76.3 |
| | MSP | 31.1 | 44.4 | 73.8 | 49.4 | 62.2 | 84.5 | 51.5 | 63.9 | 85.4 | 10.2 | 17.0 | 64.5 |
| | RDE | 20.0 | 30.8 | 72.6 | 59.5 | 70.4 | 87.6 | 34.3 | 47.9 | 85.2 | 31.2 | 44.2 | 74.9 |
| | DIST | 23.3 | 26.5 | 74.6 | 55.1 | 61.6 | 87.2 | 54.5 | 55.2 | 84.9 | 21.6 | 32.8 | 63.3 |
| | ADDMU | 56.2 | 68.7 | 85.8 | 80.4 | 84.9 | 95.0 | 68.7 | 77.0 | 90.7 | 44.9 | 58.0 | 82.5 |
| TB | PPL | 30.8 | 43.7 | 76.1 | 74.0 | 80.5 | 90.3 | 56.9 | 68.2 | 84.4 | 56.0 | 67.5 | 84.3 |
| | MSP | 72.3 | 79.0 | 90.5 | 95.6 | 93.0 | 97.3 | 70.4 | 78.1 | 89.8 | 66.4 | 75.1 | 89.0 |
| | RDE | 72.4 | 79.6 | 89.6 | 96.1 | 93.3 | 96.9 | 66.2 | 75.2 | 89.2 | 51.8 | 64.1 | 83.0 |
| | DIST | 72.4 | 78.6 | 90.6 | 95.6 | 92.8 | 96.2 | 70.2 | 77.9 | 90.2 | 50.7 | 62.7 | 82.6 |
| | ADDMU | 73.3 | 80.0 | 90.9 | 99.0 | 94.8 | 98.4 | 70.8 | 78.3 | 91.0 | 69.0 | 77.1 | 90.6 |
| TB-FB | PPL | 36.0 | 49.4 | 80.2 | 82.9 | 86.0 | 94.2 | 60.6 | 71.1 | 85.8 | 48.9 | 61.6 | 76.3 |
| | MSP | 34.8 | 48.2 | 83.0 | 81.1 | 84.9 | 91.2 | 70.0 | 77.8 | 88.4 | 34.7 | 48.0 | 81.5 |
| | RDE | 29.5 | 42.5 | 82.1 | 68.9 | 77.1 | 91.7 | 63.9 | 73.5 | 88.4 | 47.8 | 60.6 | 82.2 |
| | DIST | 34.3 | 44.0 | 82.6 | 63.4 | 72.9 | 91.5 | 69.8 | 77.6 | 89.3 | 40.8 | 53.9 | 79.0 |
| | ADDMU | 50.5 | 62.9 | 86.1 | 94.2 | 92.6 | 96.9 | 74.8 | 81.0 | 90.8 | 51.1 | 63.6 | 87.0 |
Table 3: Detection performance on regular and FB adversarial examples (*-FB) against BERT on SST-2, AGNews, Yelp, and SNLI. Our proposed ADDMU outperforms other methods by a large margin, especially on FB adversarial examples. We omit FGWS under the character-level attacks, Pruthi and TextBugger, as it is designed for word-level detection. Results are averaged over three runs with different random seeds.

worse than the baseline MSP in most cases, even though MSP simply uses the maximum softmax probability for detection. One explanation is that these class-conditional methods merely approximate softmax probabilities, so they might not be as effective as MSP at detecting examples near the decision boundary.

Finally, PPL and FGWS are also not severely affected by FB attacks. However, FGWS is only applicable to word-level attacks, and neither PPL nor FGWS is effective enough in general.

Ablation study. Data uncertainty (DU) and model uncertainty (MU) can also be used separately as detection features. In addition, both RDE and DIST can be enhanced by averaging the score over the neighborhood of the input, using the same random masking technique as in the data uncertainty estimation; we denote these variants RDE-aug and DIST-aug. In this part, we study the effectiveness of uncertainty aggregation and neighbor augmentation by comparing ADDMU with DU and MU, and by comparing RDE and DIST with RDE-aug and DIST-aug. Full results are shown in Appendix G; we show a representative portion of the results in Table 4. The summary of findings
| Attack | Method | AGNews TPR | AGNews F1 | AGNews AUC | SNLI TPR | SNLI F1 | SNLI AUC |
|---|---|---|---|---|---|---|---|
| TF | RDE | 96.0 | 93.2 | 97.0 | 46.3 | 59.3 | 81.0 |
| | RDE-aug | 97.4 | 94.0 | 97.4 | 41.0 | 54.3 | 79.9 |
| | DIST | 94.5 | 92.4 | 95.9 | 37.2 | 50.4 | 74.5 |
| | DIST-aug | 94.0 | 92.0 | 96.9 | 38.3 | 51.5 | 75.2 |
| | MU | 82.0 | 85.4 | 94.5 | 65.1 | 74.4 | 89.1 |
| | DU | 98.9 | 94.6 | 98.3 | 59.6 | 70.3 | 85.6 |
| | ADDMU | 99.2 | 94.9 | 98.6 | 68.9 | 77.0 | 89.7 |
is discussed in the following.

Table 4: Ablation study on the effect of uncertainty aggregation and neighbor augmentation against TextFooler.

We find that ADDMU, the aggregation of the two uncertainties, achieves the best results in 70 out of the 96 metric scores; DU and MU are the best in 12 scores each. This shows that combining the two uncertainties provides more information for identifying adversarial examples. We also observe that on SNLI, DU values are typically less useful, and thus the combination of DU and MU performs slightly worse than MU alone. One explanation is that the SNLI task requires a more sophisticated neighborhood construction method to generate meaningful neighbors for data uncertainty estimation. Finally, we notice that RDE-aug and DIST-aug are in general better than RDE and DIST, especially under FB attacks, which demonstrates the effectiveness of neighbor augmentation.

Why do detection results vary among datasets and attacks? Among the attacks, we find that Pruthi is the hardest to detect, followed by BAE; however, there is no obvious difference between detection performance against word-level and character-level attacks. Also, attacks on the sentence-pair task (SNLI) are in general harder to detect. Thus, future work could focus on improving the detection of adversarial examples in sentence-pair tasks like SNLI.

We investigate why detection performance varies among attacks. Our hypothesis is that attacks on some datasets fail to be imperceptible and instead change the groundtruth label of an input. These examples (which no longer qualify as adversarial, since they violate the imperceptibility requirement) then lie close to the training manifold of the target class, so AED methods find them hard to detect. To verify this assumption, we choose two tasks (SST-2 and Yelp) and two attacks (TF and BAE) for sentiment analysis. We ask Amazon
| | F1 | Correct | Wrong |
|---|---|---|---|
| SST-2 TF | 75.8 | 0.129 | 0.360 |
| SST-2 BAE | 58.9 | 0.136 | 0.597 |
| Yelp TF | 83.5 | 0.211 | 0.411 |
| Yelp BAE | 79.5 | 0.229 | 0.425 |

Example (a BAE attack on SST-2 that ADDMU fails to detect; groundtruth label changed: Positive → Negative):

Original: Most new movies have a bright sheen.
Attacked: Most new movies have a bad sheen.
Table 5: Why does detector performance vary among attacks? This might be because the attacks already flip the groundtruth labels of the examples. We show the detector performance (F1) and the proportion of adversarial examples whose sentiment changed according to human judges, for the correctly and wrongly detected sets.

MTurk workers $^{3}$ to re-label attacked examples as positive or negative. Then, we summarize the proportion of examples to which workers assign the opposite groundtruth label, within the correctly and wrongly detected groups. As shown in Table 5, there is a clear correlation between poor performance and the number of 'adversarial' examples whose groundtruth labels changed. For example, ADDMU performs weakly at detecting BAE attacks on SST-2 (58.9 F1), but it turns out that this is because more than half of the examples already have their groundtruth labels flipped. We give one example in Table 5. This shows that adversarial attacks need to be improved to retain the semantic meaning of the original input.

# 5 Characterize Adversarial Examples

In this section, we explore how to characterize adversarial examples using the two uncertainties.

MU-DU Data Map. Plotting a heatmap with MU on the X-axis and DU on the Y-axis, we visualize the data in terms of the two uncertainties. Figure 2 shows the heatmaps for natural data and for FB and regular adversarial examples generated by three attacks on three datasets (AGNews TF, Yelp BAE, SNLI Pruthi). The performance of ADDMU varies across the three attacks, as shown on the left of Figure 2.

We find that natural examples center on the bottom-left corner of the map, representing low MU and DU values; this does not vary across datasets. FB and regular adversarial examples, in contrast, have larger values for at least one of the two uncertainties.
When ADDMU performs best (AGNews TF, the first row), the center of the adversarial examples in the MU-DU map is shifted rightward and upward compared to the other cases. For the maps in the third row, the shadow stretches along the MU axis, indicating that Pruthi examples on SNLI have relatively large MU values.

![](images/157248bbc5858b553d4dffa69c17f4f0a008692b0956147dd2798f78950031c9.jpg)
Figure 2: MU-DU heatmaps based on natural and regular/FB adversarial examples generated from three attacks. X-axis: MU value; Y-axis: DU value. Attack types and ADDMU performance are labeled on the left. TPR: Regular Adv./FB Adv.

Identifying Informative ADA Data. ADA is another adversarial defense technique, which augments the training set with adversarial data and re-trains the victim model to improve its robustness. In this part, we show that ADDMU provides information for selecting the adversarial data that is most beneficial to model robustness. We test it with TF on SST-2. The procedure is as follows: since SST-2 only has public training and validation sets, we split the original training set into a training set (80%) and a validation set (20%), and use the original validation set as the test set. We first train a model on the new training set. Then, we attack the model on the validation data and compute DU and MU values for each adversarial sample. We sort the adversarial examples according to their DU and MU values and split them in half along each dimension, yielding four disjoint sets: HDHM (high DU, high MU), HDLM (high DU, low MU), LDHM (low DU, high MU), and LDLM (low DU, low MU). We augment the clean training set with each of these sets and retrain the model. As a baseline, we also test the performance of augmenting with all the adversarial examples generated from the validation set (All). We report clean accuracy (Clean %), the number of augmented examples (#Aug), the attack success rate (ASR), and the average number of queries (#Query) for each model.
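The quadrant construction described above can be sketched as a median split on the two uncertainty scores. The median split is our reading of "split them in half", and the random scores are placeholders for real DU/MU values.

```python
import numpy as np

def split_quadrants(du, mu):
    """Split adversarial examples into four disjoint index sets by the
    medians of their DU and MU values: HDHM, HDLM, LDHM, LDLM."""
    du, mu = np.asarray(du), np.asarray(mu)
    high_du, high_mu = du >= np.median(du), mu >= np.median(mu)
    return {
        "HDHM": np.where(high_du & high_mu)[0],
        "HDLM": np.where(high_du & ~high_mu)[0],
        "LDHM": np.where(~high_du & high_mu)[0],
        "LDLM": np.where(~high_du & ~high_mu)[0],
    }

rng = np.random.default_rng(0)
sets = split_quadrants(rng.random(100), rng.random(100))
print({name: len(idx) for name, idx in sets.items()})
```

Each resulting index set can then be used to select the corresponding adversarial examples for augmentation and retraining.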
| SST-2 TF | Clean % | #Aug | ASR | #Query |
|---|---|---|---|---|
| BERT | 92.8 | 0 | 94.31% | 98.51 |
| + All | 92.4 | 11199 | 87.36% | 66.31 |
| + LDLM | 91.7 | 2800 | 90.62% | 108.06 |
| + HDLM | 92.4 | 2800 | 88.59% | 111.26 |
| + LDHM | 92.9 | 2799 | 85.05% | 115.30 |
| + HDHM | 91.9 | 2799 | 87.07% | 119.92 |
Table 6: ADA performance with different types of augmented data. We find that adversarial examples with low DU and high MU are the most useful for ADA.

The results are in Table 6. We find that the most helpful adversarial examples are those with low $DU$ and high $MU$ . Using those samples, we achieve a better ASR and clean accuracy than augmenting with the whole validation set of adversarial examples, with only one quarter of the amount of data. It is expected that examples with low $DU$ and low $MU$ are less helpful, as they are more similar to the clean data. Similar observations hold for the FB version of the TF attack. We also compare augmentation with regular and FB adversarial examples; see details in Appendix E.

# 6 Related Work

Adversarial Detection. Adversarial example detection has been well studied in the image domain (Feinman et al., 2017; Lee et al., 2018; Ma et al., 2018; Xu et al., 2018; Roth et al., 2019; Li et al., 2021a; Raghuram et al., 2021). Our work aligns with Feinman et al. (2017); Li et al. (2021a); Roth et al. (2019), who introduce uncertainty estimation or perturbations as features to detect adversarial examples. We postpone the details to Appendix I and focus here on AED in the NLP domain.

In the NLP domain, there is less work exploring AED. Zhou et al. (2019) propose DISP, which learns a BERT-based discriminator to defend against adversarial examples. Mozes et al. (2021) propose a word-level detector, FGWS, which leverages the model confidence drop when replacing infrequent words in the input with frequent ones, and surpasses DISP. Pruthi et al. (2019) combat character-level attacks with word-recognition models. More recently, Yoo et al. (2022) propose a robust density estimation baseline and a benchmark for evaluating AED methods. Other works, such as Xie et al. (2022); Biju et al. (2022); Wang et al. (2022); Mosca et al. (2022), leverage other features or train a detector.
We show the limitations of these works on FB adversarial examples and propose ADDMU to overcome them.

Other Defenses against Attacks. AED is one category of approaches to defending against adversarial attacks; others have also been studied. Jin et al. (2020); Yin et al. (2020); Si et al. (2021) perform ADA, which augments the original training datasets with adversarial data for better robustness. Madry et al. (2018b); Miyato et al. (2017); Zhu et al. (2019); Zhou et al. (2020) conduct adversarial training, formulated as a min-max problem. Recently, several works perform certified robustness defense with either interval bound propagation (Huang et al., 2019; Jia et al., 2019; Shi et al., 2020) or randomized smoothing (Ye et al., 2020). In this work, we connect our AED method with ADA by selecting more informative data for augmentation.

# 7 Conclusion

We proposed ADDMU, an uncertainty-based approach for both regular and FB AED. We began by showing that existing methods are significantly affected by FB attacks. We then showed that ADDMU is minimally impacted by FB attacks and outperforms existing methods by a large margin. We further showed that ADDMU characterizes adversarial data and provides information on how to select useful augmented data for improving robustness.

# Acknowledgement

We thank the anonymous reviewers, UCLA PLUS-Lab and UCLA-NLP for their helpful feedback. This work is partially supported by DMS-2152289, DMS-2134107, IIS-2008173, IIS-2048280, a Cisco Faculty Award, and a Sloan Research Fellowship.

# Limitations

We summarize the limitations of this paper in this section.

1. We only test the AED methods on classification tasks. This is because attacks on other tasks, such as language generation, are not well-defined: for example, what would the goal function of an attack on a language generation task be? Is minimizing the BLEU score sufficient?
It is hard to conduct detection when there is no standard for what counts as a valid adversarial example. Future work might first design attacks for diverse tasks and then propose corresponding AED methods.

2. More experiments should be conducted to analyze FB adversarial examples, including their characteristics and the security concerns they pose to DNNs. Given time and space limitations, we leave this to future work.

3. Our method has slightly more hyperparameters to tune (four in total) and requires slightly more time per detection, but we confirm both remain in an acceptable range.

# References

Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890-2896.
Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning, pages 274-283. PMLR.
Emil Biju, Anirudh Sriram, Pratyush Kumar, and Mitesh Khapra. 2022. Input-specific attention subnetworks for adversarial detection. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 31-44, Dublin, Ireland. Association for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169-174. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186. +Reuben Feinman, Ryan R. Curtin, Saurabh Shintre, and Andrew B. Gardner. 2017. Detecting adversarial samples from artifacts. *ArXiv*, abs/1703.00410. + +Ronald Aylmer Fisher. 1992. Statistical methods for research workers. In *Breakthroughs in statistics*, pages 66-70. Springer. +Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML-16). +Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: BERT-based adversarial examples for text classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6174-6181. +Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations. +Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, and Pushmeet Kohli. 2019. Achieving verified robustness to symbol substitutions via interval bound propagation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4083-4093. +Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031. +Robin Jia, Aditi Raghunathan, Kerem Goksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4129-4142. +Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 8018-8025. +Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. 2018. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. Advances in neural information processing systems, 31. +Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumont, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, + +Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierrick Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. 2021. Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175-184, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. Textbugger: Generating adversarial text against real-world applications. ArXiv, abs/1812.05271. +Yao Li, Tongyi Tang, Cho-Jui Hsieh, and Thomas C. M. Lee. 2021a. 
Detecting adversarial examples with bayesian neural network. ArXiv, abs/2105.08620. +Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, and Cho-Jui Hsieh. 2021b. Searching for an effective defender: Benchmarking defense against adversarial word substitution. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3137-3147. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692. +Xingjun Ma, Bo Li, Yisen Wang, Sarah M Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E Houle, and James Bailey. 2018. Characterizing adversarial subspaces using local intrinsic dimensionality. In International Conference on Learning Representations. +Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018a. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations. +Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018b. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. +Takeru Miyato, Andrew M. Dai, and Ian J. Goodfellow. 2017. Adversarial training methods for semi-supervised text classification. arXiv: Machine Learning. +John Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, and Yanjun Qi. 2020a. Reevaluating adversarial examples in natural language. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 3829-3839, Online. Association for Computational Linguistics. + +John Morris, Jin Yong Yoo, and Yanjun Qi. 2020b. TextAttack: Lessons learned in designing python frameworks for NLP. 
In Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS), pages 126-131.

Edoardo Mosca, Shreyash Agarwal, Javier Rando-Ramirez, and George Louis Groh. 2022. "That is a suspicious reaction!": Interpreting logits variation to detect NLP adversarial attacks. In ACL.

Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, and Lewis Griffin. 2021. Frequency-guided word substitutions for detecting textual adversarial examples. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 171-186, Online. Association for Computational Linguistics.

Daniel Naber et al. 2003. A rule-based style and grammar checker.

Danish Pruthi, Bhuwan Dhingra, and Zachary C. Lipton. 2019. Combating adversarial misspellings with robust word recognition. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5582-5591.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.

Jayaram Raghuram, Varun Chandrasekaran, Somesh Jha, and Suman Banerjee. 2021. A general framework for detecting anomalous inputs to DNN classifiers. In International Conference on Machine Learning, pages 8764-8775. PMLR.

Kevin Roth, Yannic Kilcher, and Thomas Hofmann. 2019. The odds are odd: A statistical test for detecting adversarial examples. In International Conference on Machine Learning, pages 5498-5507. PMLR.

Peter J. Rousseeuw. 1984. Least median of squares regression. Journal of the American Statistical Association, 79:871-880.

Bernhard Schölkopf, Alex Smola, and Klaus-Robert Müller. 1998. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319.

Artem Shelmanov, Evgenii Tsymbalov, Dmitri Puzyrev, Kirill Fedyanin, Alexander Panchenko, and Maxim Panov. 2021. How certain is your Transformer?
In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1833-1840.

Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, and Cho-Jui Hsieh. 2020. Robustness verification for transformers. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.

Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. 2021. Better robustness by more coverage: Adversarial and mixup data augmentation for robust finetuning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1569-1576.

Lewis Smith and Yarin Gal. 2018. Understanding measures of uncertainty for adversarial example detection. ArXiv, abs/1803.08533.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642.

Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958.

Xiaosen Wang, Yifeng Xiong, and Kun He. 2022. Detecting textual adversarial examples through randomized substitution and vote. In UAI.

Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Sameer Singh, and Daniel Lowd. 2022. Identifying adversarial attacks on text classifiers. ArXiv, abs/2201.08555.

Weilin Xu, David Evans, and Yanjun Qi. 2018. Feature squeezing: Detecting adversarial examples in deep neural networks. ArXiv, abs/1704.01155.

Mao Ye, Chengyue Gong, and Qiang Liu. 2020. SAFER: A structure-free approach for certified robustness to adversarial word substitutions.
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3465-3475.

Fan Yin, Quanyu Long, Tao Meng, and Kai-Wei Chang. 2020. On the robustness of language encoders against grammatical errors. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3386-3403.

KiYoon Yoo, Jangho Kim, Jiho Jang, and Nojun Kwak. 2022. Detection of adversarial examples in text classification: Benchmark and baseline via robust density estimation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3656-3672, Dublin, Ireland. Association for Computational Linguistics.

Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, pages 649-657.

Yi Zhou, Xiaqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, and Xuanjing Huang. 2020. Defense against adversarial attacks in NLP via Dirichlet neighborhood ensemble. ArXiv, abs/2006.11627.

Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, and Wei Wang. 2019. Learning to discriminate perturbations for blocking adversarial attacks in text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).

Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2019. FreeLB: Enhanced adversarial training for natural language understanding. In International Conference on Learning Representations.

# A Regular vs. FB Adversarial Examples

In this section, we qualitatively show some far-boundary adversarial examples in Table 7. Even for humans it is hard to identify such far-boundary examples, which calls for an automatic way of detecting them.
# B Experimental Setup Details

# B.1 Datasets and Target Models

We conduct experiments on four datasets: SST-2, Yelp-Polarity, AGNews, and SNLI. Statistics for these datasets are summarized in Table 8. All of them are available from Huggingface Datasets (Lhoest et al., 2021). Our target models are BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019). We use the publicly accessible BERT-base-uncased and RoBERTa-base models fine-tuned on the above datasets provided by TextAttack (Morris et al., 2020b) to aid reproducibility. The performance of these models is summarized in Table 9.

# B.2 Attacks and Statistics

We consider four attacks: TextFooler (Jin et al., 2020), BAE (Garg and Ramakrishnan, 2020), Pruthi (Pruthi et al., 2019), and TextBugger (Li et al., 2019). TextFooler and BAE are word-level attacks; Pruthi and TextBugger are character-level attacks. For BAE, we use BAE-R, i.e., replacing a word with a substitution. For attacks on SNLI, we only perturb the hypothesis sentence. For FB attacks, as stated in the main paper, we add another goal function to ensure that the softmax probability of the attacked class is larger than a threshold $\epsilon$. We select $\epsilon = 0.9$ for SST-2, Yelp, and AGNews, and $\epsilon = 0.7$ for SNLI. We implement these attacks with TextAttack using the default hyperparameter settings; please refer to the TextAttack documentation for details. Here we report the after-attack accuracy (Adv. Acc), the attack success rate (ASR), the number of queries (#Query), and the number of adversarial examples we select (#Adv) for each attack on each dataset, as well as for FB attacks. Note that the total number of evaluated examples is twice the number of adversarial examples. See Table 10 and Table 11.

# C DIST

We propose the DIST baseline, a distance-based detector motivated by Ma et al. (2018). We also find that the Local Intrinsic Dimension value proposed in Ma et al.
(2018) does not work well when detecting NLP attacks. The DIST method leverages the whole training set as $\mathcal{D}_{aux}$. It selects the K-nearest neighbors of the evaluated point from each class of $\mathcal{D}_{aux}$ and calculates the average distance between the neighbors and the evaluated point, denoted as $d_1, d_2, \dots, d_k$, where $k$ is the number of classes. Suppose the evaluated point has predicted class $i$. DIST then uses the difference between the distance of class $i$ and the minimum over the other classes as the detection score, i.e., $d_i - \min_{j \neq i} d_j$. The intuition is that since adversarial examples are generated from the original class, they might still be closer to the training data of that class, which is captured by $\min_{j \neq i} d_j$.

# D Implementation Details

For DIST and ADDMU, we tune the hyperparameters with an attacked validation set. For datasets with an original train/validation/test split (SNLI), we simply attack the examples in the validation set and select 100 of them for tuning. For datasets without an original split, like SST-2, Yelp, and AGNews, we randomly hold out 100 examples from the training set and attack them to construct a tuning set. For DIST, we select the number of neighbors from $\{100, 200, \dots, 1000\}$. For ADDMU, we select $N_{m}$ and $N_{d}$ from $\{10, 20, 80, 100\}$, and choose $p_{m}$ and $p_{d}$ from $\{0.1, 0.2, 0.3, 0.4\}$. In our preliminary experiments, we find that ensembling different $p_{d}$ values also helps, so we additionally consider ensembles over the combinations $\{(0.1, 0.2), (0.1, 0.2, 0.3, 0.4)\}$. We also find that augmenting the model uncertainty estimate with neighborhood data is helpful, so for the model uncertainty value we average over 10 neighborhood points with a 0.1 mask rate.
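To make the DIST scoring rule from Appendix C concrete, here is a minimal sketch assuming Euclidean distances over fixed sentence embeddings; all variable names are illustrative, not taken from any released code.

```python
import numpy as np

def dist_score(x_emb, train_embs, train_labels, pred_class, k=100):
    """DIST detection score: average distance to the k nearest training
    neighbors of the predicted class, minus the minimum such average over
    all other classes. Higher scores suggest an adversarial input, since
    adversarial examples tend to stay close to their original class."""
    avg = {}
    for c in np.unique(train_labels):
        d = np.linalg.norm(train_embs[train_labels == c] - x_emb, axis=1)
        avg[c] = np.sort(d)[:k].mean()  # mean distance to k nearest in class c
    return avg[pred_class] - min(v for c, v in avg.items() if c != pred_class)
```

Under this sign convention, thresholding the score separates clean inputs (large negative scores) from adversarial ones (scores near or above zero).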
# E Selecting Useful Data with Uncertainty Values

In this section, we present the results of selecting useful data for ADA using DU and MU values for the FB version of TF, shown in Table 13. Similar to the regular version, we find that the most useful data are still those with low data uncertainty and high model uncertainty. We achieve a better ASR and number of queries using only one quarter of the data compared to full augmentation. In Table 14, we show the attack success rate of four settings: 1)
| Attacks | Examples | Prob. |
| --- | --- | --- |
| Original | Seattle may have just won the 2014 Super Bowl, but the Steelers still [[rock]] with six rings, Baby!!! Just stating what all Steeler fans know: a Steel Dynasty is still unmatched no matter what team claims the title of current Super Bowl Champs.. Go Steelers!!! | 100% |
| Regular | Seattle may have just won the 2014 Super Bowl, but the Steelers still [[trembles]] with six rings, Baby!!! Just stating what all Steeler fans know: a Steel Dynasty is still unmatched no matter what team claims the title of current Super Bowl Champs.. Go Steelers!!! | 57% |
| FB | Seattle may have just won the 2014 Super Bowl, but the Steelers [[again]] [[trembles]] with six rings, Baby!!! Just stating what all Steeler fans know: a Steel Dynasty is still unmatched no matter what team claims the title of current Super Bowl Champs.. Go Steelers!!! | 95% |
| Original | Cisco invests $12 million in Japan R&D center On Thursday, the [[company]] announced it will invest $12 million over the next five years in a new research and development center in Tokyo. | 71% |
| Regular | Cisco invests $12 million in Japan R&D center On Thursday, the [[firm]] announced it will invest $12 million over the next five years in a new research and development center in Tokyo. | 63% |
| FB | Cisco invests $12 million in Japan R&D center On Thursday, the company [[mentioned]] it will invest $12 million over the next five [[decades]] in a new research and development [[centre]] in Tokyo. | 95% |
| Original | King Pong Draws Fans Spike TV's Video [[Game]] Awards Show attracts big-name celebrities and bands but gives the fans the votes. | 93% |
| Regular | King Pong Draws Fans Spike TV's [[tv]] Game Awards Show attracts big-name celebrities and bands but gives the fans the votes. | 57% |
| FB | King Pong Draws Fans Spike TV's Video [[play]] Awards Show attracts big-name celebrities and bands but gives the fans the votes. | 90% |
Table 7: Examples of the changes made by regular and far-boundary adversarial examples. The last column shows the prediction probability of the predicted class. It would still be hard for humans to identify the changes made by far-boundary examples, so an automatic way to detect them is necessary.
| Dataset | Train/Dev/Test | AvgLen | #Labels |
| --- | --- | --- | --- |
| SST-2 | 67.3k/0.8k/- | 19.2 | 2 |
| Yelp | 560k/-/38k | 152.8 | 2 |
| AGNews | 120k/-/7.6k | 35.5 | 4 |
| SNLI | 550k/10k/20k | 8.3 | 3 |
Table 8: Data statistics of the four datasets.
| Dataset | SST-2 | Yelp | AGNews | SNLI |
| --- | --- | --- | --- | --- |
| BERT | 92.43 | 96.30 | 94.20 | 89.40 |
| RoBERTa | 94.04 | - | 94.70 | - |
Table 9: BERT-base-uncased and RoBERTa-base accuracy on the four datasets. TextAttack does not provide public RoBERTa models fine-tuned on Yelp and SNLI.

Augment with FB examples to defend against regular attacks; 2) augment with FB examples to defend against FB attacks; 3) augment with regular examples to defend against regular attacks; 4) augment with regular examples to defend against FB attacks. The finding is that augmenting with FB or regular adversarial examples most benefits defense against its own attack type. This implies that FB attacks may already change the characteristics of regular attacks, and we need to defend against them with different strategies.

# F RoBERTa Results

We conduct adversarial example detection with RoBERTa-base. The setting is the same as for BERT. Through the hyperparameter search described above, for ADDMU we select $N_{m} = 20$ and $N_{d} = 100$, and choose $p_{m} = 0.1$ and $p_{d} = 0.1$, without augmentation for MU estimation and without ensembling over $p_{d}$ values. Table 16 presents the results for RoBERTa-base. ADDMU also outperforms the other methods with RoBERTa. We combine the ablation results with the main results for RoBERTa.
| Attack | Dataset | Adv. Acc | ASR% | #Query | #Adv |
| --- | --- | --- | --- | --- | --- |
| TextFooler | SST-2 | 4.5 | 95.1 | 95.3 | 1290 |
| | Yelp | 6.0 | 93.8 | 475.7 | 738 |
| | AGNews | 17.7 | 81.4 | 333.5 | 1625 |
| | SNLI | 3.0 | 96.7 | 58.5 | 2222 |
| BAE | SST-2 | 38.3 | 58.9 | 60.8 | 412 |
| | Yelp | 44.9 | 53.7 | 319.9 | 1039 |
| | AGNews | 81.5 | 14.3 | 122.5 | 278 |
| | SNLI | 32.5 | 64.0 | 43.4 | 1605 |
| Pruthi | SST-2 | 59.2 | 36.0 | 326.9 | 111 |
| | Yelp | 86.4 | 11.5 | 1678.1 | 1036 |
| | AGNews | 84.5 | 11.1 | 792.0 | 239 |
| | SNLI | 23.2 | 74.4 | 103.4 | 1846 |
| TextBugger | SST-2 | 28.9 | 68.7 | 49.3 | 221 |
| | Yelp | 16.3 | 83.3 | 350.1 | 738 |
| | AGNews | 20.2 | 79.2 | 123.4 | 1088 |
| | SNLI | 4.5 | 95.0 | 41.9 | 2225 |
# G Ablation Study

We present the full results for the ablation study of uncertainty aggregation in Table 15. We also show that our neighborhood construction process for data uncertainty can be used to enhance two baselines, RDE and DIST.

# H Preliminary Results on BiLSTM

We experiment with a one-layer BiLSTM model with hidden dimension 150 and dropout 0.3. The model achieves 89.3 clean accuracy on SST-2. In our preliminary experiments, we test detection of TextFooler and BAE attacks and their corresponding FB attacked examples. We compare our ADDMU detector with three baselines: PPL, FGWS, and RDE. Results are shown in Table 12. ADDMU still achieves the best performance, while the previous SOTA on detecting BERT and RoBERTa adversarial examples, RDE, breaks down when detecting BiLSTM adversarial examples.

# I Related Work in CV

Feinman et al. (2017) train a binary classifier using density estimation and Bayesian uncertainty estimation as features for detection. Li et al. (2021a) replace DNNs with Bayesian neural networks, which enhances the distributional separation between natural and adversarial examples and benefits AED.

Table 10: Statistics about attacks. We report the adversarial accuracy (Adv. Acc), attack success rate (ASR%), the number of queries (#Query), and the number of adversarial examples examined.
| Attack | Dataset | Adv. Acc | ASR% | #Query | #Adv |
| --- | --- | --- | --- | --- | --- |
| TF-FB | SST-2 | 6.54 | 94.8 | 108.4 | 295 |
| | Yelp | 6.2 | 93.7 | 496.0 | 1027 |
| | AGNews | 22.0 | 77.4 | 365.7 | 1604 |
| | SNLI | 8.3 | 91.4 | 69.6 | 2068 |
| BAE-FB | SST-2 | 45.3 | 52.2 | 64.3 | 164 |
| | Yelp | 50.2 | 48.8 | 323.4 | 333 |
| | AGNews | 87.6 | 9.7 | 119.5 | 202 |
| | SNLI | 46.8 | 51.3 | 44.5 | 1347 |
| Pruthi-FB | SST-2 | 68.9 | 27.3 | 326.4 | 90 |
| | Yelp | 89.4 | 9.1 | 1681.0 | 134 |
| | AGNews | 89.8 | 7.4 | 791.4 | 158 |
| | SNLI | 47.2 | 50.9 | 103.8 | 1323 |
| TB-FB | SST-2 | 35.3 | 62.6 | 53.0 | 207 |
| | Yelp | 18.4 | 81.3 | 369.4 | 1025 |
| | AGNews | 53.1 | 45.3 | 191.1 | 948 |
| | SNLI | 18.1 | 81.2 | 50.1 | 2093 |
Table 11: Statistics about FB attacks. We report the adversarial accuracy (Adv. Acc), attack success rate (ASR%), the number of queries (#Query), and the number of adversarial examples examined.
| | TF | TF-FB | BAE | BAE-FB |
| --- | --- | --- | --- | --- |
| PPL | 75.8 | 77.1 | 41.9 | 40.9 |
| FGWS | 86.2 | 87.1 | 83.7 | 81.4 |
| RDE | 15.6 | 24.0 | 21.2 | 33.3 |
| ADDMU | 93.7 | 89.3 | 92.2 | 87.6 |
Table 12: Detection results on a BiLSTM victim model. The values are F1 scores at $\mathrm{FPR} = 0.1$. ADDMU still achieves the best performance on these two attacks. Note also that RDE, the previous SOTA on BERT and RoBERTa, breaks down when trying to detect BiLSTM adversarial examples.

Roth et al. (2019) use log-odds on perturbed examples as statistics for detection. Further, Athalye et al. (2018) made observations similar to ours for image attacks: they find that the distance-based feature, local intrinsic dimensionality, proposed in Ma et al. (2018) for AED fails when it encounters FB adversarial examples.
| SST-2 TF | Clean % | #Aug | ASR | #Query |
| --- | --- | --- | --- | --- |
| BERT | 95.80 | - | 88.66% | 118.99 |
| + All | 95.61 | 1199 | 77.58% | 140.74 |
| + LDLM | 95.62 | 800 | 82.52% | 137.50 |
| + HDLM | 95.82 | 800 | 78.25% | 142.26 |
| + LDHM | 95.82 | 799 | 75.30% | 145.79 |
| + HDHM | 95.32 | 799 | 77.67% | 142.42 |
Table 13: ADA performance for the FB version with different types of augmented data. We find that adversarial examples with low DU and high MU are most useful for ADA.
| | Regular | FB |
| --- | --- | --- |
| Regular | 87.2 | 90.2 |
| FB | 82.3 | 77.1 |
Table 14: Attack success rate for four settings of augmentation. The columns indicate the augmented data; the rows indicate the attack types.
| Attacks | Methods | SST-2 TPR | SST-2 F1 | SST-2 AUC | AGNews TPR | AGNews F1 | AGNews AUC | Yelp TPR | Yelp F1 | Yelp AUC | SNLI TPR | SNLI F1 | SNLI AUC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TF | RDE | 62.9 | 72.8 | 86.5 | 96.0 | 93.2 | 97.0 | 72.0 | 79.2 | 89.6 | 46.3 | 59.3 | 81.0 |
| | RDE-aug | 63.6 | 73.3 | 86.6 | 97.4 | 94.0 | 97.4 | 70.1 | 77.9 | 89.8 | 41.0 | 54.3 | 79.9 |
| | DIST | 64.0 | 73.4 | 87.9 | 94.5 | 92.4 | 95.9 | 73.8 | 80.3 | 90.6 | 37.2 | 50.4 | 74.5 |
| | DIST-aug | 60.2 | 70.5 | 86.5 | 94.0 | 92.0 | 96.9 | 75.7 | 81.4 | 90.8 | 38.3 | 51.5 | 75.2 |
| | MU | 51.9 | 64.2 | 85.9 | 82.0 | 85.4 | 94.5 | 71.7 | 79.0 | 90.1 | 65.1 | 74.4 | 89.1 |
| | DU | 60.6 | 71.1 | 87.8 | 98.9 | 94.6 | 98.3 | 76.3 | 82.0 | 90.6 | 59.6 | 70.3 | 85.6 |
| | ADDMU | 67.1 | 75.8 | 88.8 | 99.2 | 94.9 | 98.6 | 78.7 | 83.5 | 91.6 | 68.9 | 77.0 | 89.7 |
| TF-FB | RDE | 31.9 | 45.0 | 81.5 | 71.9 | 79.1 | 92.5 | 31.5 | 44.6 | 82.7 | 43.1 | 56.4 | 79.6 |
| | RDE-aug | 36.6 | 50.2 | 80.4 | 90.8 | 90.5 | 95.9 | 61.5 | 71.8 | 87.8 | 37.6 | 51.0 | 78.9 |
| | DIST | 20.7 | 26.3 | 81.6 | 66.6 | 75.4 | 91.8 | 54.8 | 64.3 | 86.2 | 27.2 | 39.6 | 69.9 |
| | DIST-aug | 50.5 | 62.4 | 84.0 | 81.9 | 85.3 | 94.5 | 64.0 | 73.5 | 88.5 | 29.6 | 42.3 | 71.0 |
| | MU | 54.4 | 66.3 | 85.4 | 90.7 | 90.4 | 96.5 | 70.7 | 78.3 | 89.1 | 60.2 | 70.8 | 87.0 |
| | DU | 55.4 | 67.1 | 84.5 | 97.2 | 93.9 | 97.5 | 70.1 | 77.9 | 88.4 | 31.6 | 44.6 | 78.2 |
| | ADDMU | 59.4 | 70.6 | 87.3 | 97.5 | 94.0 | 97.8 | 72.8 | 79.7 | 89.7 | 53.6 | 65.8 | 87.5 |
| BAE | RDE | 44.2 | 57.3 | 79.3 | 96.4 | 93.7 | 96.3 | 65.2 | 74.5 | 89.1 | 41.7 | 55.0 | 76.8 |
| | RDE-aug | 49.3 | 61.9 | 82.4 | 85.6 | 87.8 | 94.3 | 61.7 | 71.9 | 88.5 | 44.9 | 57.9 | 80.3 |
| | DIST | 44.9 | 57.3 | 78.9 | 94.2 | 91.9 | 96.2 | 68.0 | 76.2 | 89.4 | 36.8 | 49.7 | 67.9 |
| | DIST-aug | 38.1 | 50.7 | 77.8 | 86.3 | 87.9 | 94.7 | 66.1 | 74.8 | 89.7 | 38.3 | 51.6 | 69.8 |
| | MU | 41.7 | 55.0 | 78.8 | 86.7 | 88.3 | 94.1 | 64.6 | 74.1 | 88.6 | 44.4 | 57.5 | 76.9 |
| | DU | 45.9 | 58.9 | 83.3 | 97.5 | 94.3 | 98.1 | 71.5 | 78.5 | 89.7 | 44.7 | 57.8 | 80.5 |
| | ADDMU | 45.9 | 58.9 | 82.3 | 96.4 | 93.5 | 97.3 | 72.5 | 79.5 | 90.1 | 48.2 | 61.0 | 81.0 |
| BAE-FB | RDE | 19.5 | 30.2 | 72.5 | 68.8 | 77.0 | 91.2 | 66.4 | 75.4 | 88.1 | 34.6 | 47.9 | 74.0 |
| | RDE-aug | 48.2 | 61.0 | 82.6 | 63.4 | 73.4 | 91.1 | 66.1 | 75.1 | 88.9 | 40.8 | 54.2 | 79.4 |
| | DIST | 17.7 | 26.1 | 70.1 | 64.9 | 68.1 | 91.4 | 69.7 | 77.3 | 88.4 | 29.5 | 42.3 | 62.9 |
| | DIST-aug | 28.7 | 40.0 | 72.4 | 70.3 | 76.3 | 91.6 | 71.5 | 78.0 | 89.8 | 31.5 | 44.0 | 65.4 |
| | MU | 49.7 | 62.3 | 82.3 | 83.7 | 86.4 | 94.0 | 74.5 | 80.8 | 89.9 | 36.5 | 49.9 | 73.1 |
| | DU | 56.4 | 67.8 | 84.4 | 84.7 | 87.0 | 93.4 | 74.5 | 80.8 | 90.2 | 22.9 | 34.5 | 74.3 |
| | ADDMU | 51.4 | 64.1 | 84.6 | 83.7 | 85.9 | 94.1 | 76.3 | 81.9 | 90.6 | 34.9 | 48.4 | 76.0 |
| Pruthi | RDE | 41.4 | 55.1 | 80.6 | 77.4 | 82.8 | 92.4 | 52.6 | 64.8 | 88.0 | 34.6 | 47.8 | 76.5 |
| | RDE-aug | 40.5 | 53.9 | 78.9 | 87.4 | 88.7 | 94.1 | 64.7 | 74.3 | 88.0 | 35.4 | 48.7 | 77.3 |
| | DIST | 55.0 | 61.4 | 82.9 | 77.8 | 82.0 | 92.1 | 66.7 | 72.2 | 88.2 | 23.6 | 35.2 | 65.1 |
| | DIST-aug | 50.5 | 61.4 | 84.1 | 81.2 | 84.6 | 94.1 | 69.2 | 75.6 | 89.5 | 26.4 | 38.7 | 67.4 |
| | MU | 48.6 | 61.4 | 85.3 | 89.5 | 89.9 | 95.5 | 77.5 | 83.1 | 90.7 | 61.8 | 72.0 | 86.8 |
| | DU | 55.7 | 66.8 | 82.7 | 95.8 | 93.8 | 97.3 | 72.4 | 79.6 | 88.8 | 26.6 | 39.0 | 74.4 |
| | ADDMU | 55.9 | 67.4 | 85.4 | 96.7 | 93.9 | 97.4 | 78.8 | 83.7 | 91.8 | 55.7 | 67.1 | 86.0 |
| Pruthi-FB | RDE | 20.0 | 30.8 | 72.6 | 59.5 | 70.4 | 87.6 | 34.3 | 47.9 | 85.2 | 31.2 | 44.2 | 74.9 |
| | RDE-aug | 26.7 | 39.0 | 74.5 | 67.7 | 76.4 | 91.8 | 60.4 | 71.1 | 87.0 | 31.0 | 44.0 | 76.0 |
| | DIST | 23.3 | 26.5 | 74.6 | 55.1 | 61.6 | 87.2 | 54.5 | 55.2 | 84.9 | 21.6 | 32.8 | 63.3 |
| | DIST-aug | 25.6 | 35.0 | 76.0 | 69.6 | 76.3 | 91.3 | 59.7 | 69.6 | 87.5 | 23.8 | 35.4 | 65.7 |
| | MU | 56.2 | 67.7 | 85.2 | 80.3 | 84.8 | 94.5 | 67.9 | 76.8 | 91.7 | 60.7 | 71.1 | 85.5 |
| | DU | 56.2 | 68.5 | 83.1 | 79.1 | 83.9 | 93.5 | 67.2 | 75.9 | 86.4 | 13.9 | 22.4 | 70.3 |
| | ADDMU | 56.2 | 68.7 | 85.8 | 80.4 | 84.9 | 95.0 | 68.7 | 77.0 | 91.7 | 44.9 | 58.0 | 82.5 |
| TB | RDE | 72.4 | 79.6 | 89.6 | 96.1 | 93.3 | 96.9 | 66.2 | 75.2 | 89.2 | 51.8 | 64.1 | 83.0 |
| | RDE-aug | 54.3 | 66.1 | 85.0 | 95.6 | 93.0 | 96.9 | 61.7 | 71.9 | 87.8 | 45.9 | 58.9 | 80.9 |
| | DIST | 72.4 | 78.6 | 90.6 | 95.6 | 92.8 | 96.2 | 70.2 | 77.9 | 90.2 | 50.7 | 62.7 | 82.6 |
| | DIST-aug | 72.9 | 79.1 | 89.7 | 93.0 | 91.6 | 96.3 | 70.5 | 78.0 | 90.5 | 52.0 | 64.2 | 83.1 |
| | MU | 67.4 | 76.0 | 88.9 | 79.8 | 84.1 | 94.5 | 67.0 | 75.7 | 88.9 | 60.2 | 70.8 | 88.6 |
| | DU | 77.8 | 82.9 | 90.2 | 98.4 | 94.7 | 98.0 | 69.3 | 77.3 | 89.2 | 66.9 | 75.4 | 88.9 |
| | ADDMU | 73.3 | 80.0 | 90.9 | 99.0 | 94.8 | 98.4 | 70.8 | 78.3 | 91.0 | 69.0 | 77.1 | 90.6 |
| TB-FB | RDE | 29.5 | 42.5 | 82.1 | 68.9 | 77.1 | 91.7 | 63.9 | 73.5 | 88.4 | 47.8 | 60.6 | 82.2 |
| | RDE-aug | 42.0 | 55.4 | 80.2 | 86.6 | 88.2 | 94.7 | 59.6 | 70.3 | 87.5 | 40.7 | 54.0 | 80.1 |
| | DIST | 34.3 | 44.0 | 82.6 | 63.4 | 72.9 | 91.5 | 69.8 | 77.6 | 89.3 | 40.8 | 53.9 | 79.0 |
| | DIST-aug | 49.8 | 59.0 | 84.6 | 80.4 | 84.3 | 93.6 | 71.8 | 78.9 | 90.4 | 43.9 | 57.0 | 79.8 |
| | MU | 55.9 | 67.4 | 85.8 | 91.8 | 91.0 | 96.1 | 72.2 | 79.4 | 89.6 | 57.7 | 68.8 | 87.0 |
| | DU | 58.1 | 69.2 | 85.0 | 94.1 | 92.2 | 96.5 | 72.7 | 79.6 | 89.2 | 40.9 | 54.2 | 81.5 |
| | ADDMU | 50.5 | 62.9 | 86.1 | 94.2 | 92.6 | 96.9 | 74.8 | 81.0 | 90.8 | 51.1 | 63.6 | 87.0 |
Table 15: Ablation of detection performance on regular and FB adversarial examples (*-FB) against BERT on SST-2, AGNews, Yelp, and SNLI. We compare ADDMU with DU alone, MU alone, and two enhanced baselines, RDE-aug and DIST-aug. The best performance is bolded. Results are averaged over three runs with different random seeds.
| Attacks | Methods | SST-2 TPR | SST-2 F1 | SST-2 AUC | AGNews TPR | AGNews F1 | AGNews AUC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TF | PPL | 34.0 | 47.2 | 73.7 | 78.2 | 83.1 | 92.0 |
| | MSP | 71.0 | 78.5 | 89.8 | 93.5 | 91.9 | 97.2 |
| | RDE | 73.9 | 80.4 | 89.8 | 90.6 | 90.4 | 95.5 |
| | RDE-aug | 61.3 | 71.6 | 87.1 | 61.7 | 71.9 | 87.9 |
| | DIST | 70.3 | 77.9 | 90.2 | 94.6 | 92.5 | 96.5 |
| | DIST-aug | 72.7 | 78.8 | 90.1 | 83.8 | 86.4 | 94.6 |
| | MU | 78.0 | 82.9 | 91.1 | 98.7 | 94.6 | 97.6 |
| | DU | 70.1 | 79.2 | 89.5 | 95.9 | 93.2 | 97.6 |
| | ADDMU | 78.4 | 83.9 | 91.3 | 98.8 | 94.9 | 98.3 |
| TF-FB | PPL | 43.8 | 57.0 | 79.6 | 84.4 | 86.9 | 94.1 |
| | MSP | 55.3 | 66.9 | 85.0 | 30.5 | 43.5 | 87.6 |
| | RDE | 40.5 | 53.8 | 84.6 | 57.0 | 68.3 | 88.7 |
| | RDE-aug | 46.7 | 59.7 | 82.4 | 48.1 | 60.9 | 81.9 |
| | DIST | 48.7 | 58.2 | 85.1 | 47.0 | 59.7 | 89.8 |
| | DIST-aug | 48.7 | 60.8 | 85.8 | 67.4 | 75.4 | 90.9 |
| | MU | 55.9 | 66.9 | 89.2 | 77.4 | 82.6 | 93.5 |
| | DU | 55.1 | 64.2 | 84.5 | 88.4 | 89.6 | 95.7 |
| | ADDMU | 54.6 | 66.8 | 88.5 | 88.6 | 89.2 | 95.8 |
| BAE | PPL | 17.2 | 27.1 | 64.0 | 38.1 | 51.5 | 74.0 |
| | MSP | 48.1 | 60.9 | 78.6 | 93.4 | 91.9 | 97.2 |
| | RDE | 53.8 | 65.7 | 80.3 | 77.2 | 82.5 | 93.2 |
| | RDE-aug | 53.5 | 65.5 | 84.4 | 52.9 | 64.9 | 82.6 |
| | DIST | 48.1 | 60.3 | 79.7 | 88.3 | 88.9 | 95.0 |
| | DIST-aug | 48.5 | 61.2 | 80.9 | 72.4 | 79.0 | 91.3 |
| | MU | 55.7 | 66.9 | 81.8 | 93.7 | 92.0 | 95.9 |
| | DU | 52.4 | 64.0 | 84.6 | 92.2 | 91.2 | 96.2 |
| | ADDMU | 55.8 | 67.0 | 84.9 | 97.6 | 94.1 | 97.9 |
| BAE-FB | PPL | 27.0 | 39.6 | 68.7 | 34.5 | 47.8 | 73.6 |
| | MSP | 31.4 | 44.6 | 69.8 | 77.0 | 82.4 | 89.8 |
| | RDE | 25.8 | 38.1 | 72.5 | 57.0 | 68.3 | 89.4 |
| | RDE-aug | 40.3 | 53.8 | 77.9 | 61.6 | 71.9 | 89.7 |
| | DIST | 25.8 | 31.2 | 71.2 | 36.0 | 47.2 | 89.9 |
| | DIST-aug | 30.8 | 42.7 | 73.8 | 62.0 | 70.2 | 89.5 |
| | MU | 43.4 | 56.2 | 76.5 | 92.0 | 91.3 | 95.0 |
| | DU | 37.7 | 51.3 | 78.1 | 79.5 | 86.9 | 93.3 |
| | ADDMU | 44.4 | 57.1 | 78.3 | 92.0 | 91.9 | 95.7 |
| Pruthi | PPL | 34.0 | 47.6 | 74.4 | 31.4 | 44.4 | 73.9 |
| | MSP | 62.0 | 72.1 | 83.1 | 70.2 | 78.0 | 93.0 |
| | RDE | 57.0 | 68.3 | 83.3 | 61.6 | 71.9 | 89.7 |
| | RDE-aug | 52.0 | 64.2 | 84.4 | 40.8 | 54.2 | 81.1 |
| | DIST | 63.0 | 70.6 | 83.1 | 65.5 | 74.6 | 92.5 |
| | DIST-aug | 52.0 | 64.6 | 84.3 | 72.2 | 77.5 | 91.1 |
| | MU | 73.0 | 77.1 | 89.3 | 94.5 | 92.5 | 97.5 |
| | DU | 58.0 | 68.3 | 84.9 | 85.1 | 87.3 | 95.5 |
| | ADDMU | 77.0 | 82.4 | 88.0 | 92.5 | 91.5 | 97.6 |
| Pruthi-FB | PPL | 23.4 | 35.3 | 71.0 | 27.9 | 40.6 | 71.5 |
| | MSP | 40.6 | 54.2 | 76.3 | 12.5 | 20.5 | 83.5 |
| | RDE | 51.6 | 64.1 | 78.4 | 35.3 | 48.7 | 82.1 |
| | RDE-aug | 37.5 | 51.1 | 75.7 | 26.5 | 39.1 | 74.6 |
| | DIST | 43.8 | 22.8 | 77.3 | 25.5 | 34.4 | 83.4 |
| | DIST-aug | 40.6 | 47.8 | 79.6 | 56.6 | 67.0 | 86.1 |
| | MU | 64.1 | 62.9 | 84.9 | 72.8 | 78.4 | 92.8 |
| | DU | 39.1 | 51.1 | 81.1 | 61.8 | 71.6 | 90.9 |
| | ADDMU | 64.1 | 74.5 | 89.1 | 75.7 | 82.1 | 93.4 |
| TB | PPL | 45.5 | 58.6 | 81.2 | 76.7 | 82.2 | 91.2 |
| | MSP | 74.7 | 81.1 | 91.8 | 91.4 | 90.8 | 96.8 |
| | RDE | 76.8 | 82.4 | 92.0 | 86.0 | 87.8 | 84.5 |
| | RDE-aug | 60.6 | 71.2 | 88.4 | 57.4 | 68.6 | 85.6 |
| | DIST | 76.3 | 80.7 | 91.5 | 93.4 | 91.7 | 96.0 |
| | DIST-aug | 77.3 | 80.1 | 91.8 | 80.1 | 84.0 | 93.9 |
| | MU | 78.3 | 82.3 | 92.4 | 98.3 | 94.4 | 97.3 |
| | DU | 74.2 | 80.4 | 90.7 | 94.0 | 92.1 | 97.0 |
| | ADDMU | 78.8 | 82.9 | 92.4 | 98.3 | 94.5 | 97.9 |
| TB-FB | PPL | 42.7 | 55.9 | 81.6 | 78.9 | 83.7 | 92.6 |
| | MSP | 57.5 | 69.4 | 86.3 | 29.8 | 42.7 | 87.6 |
| | RDE | 52.0 | 64.3 | 86.2 | 47.7 | 60.6 | 87.3 |
| | RDE-aug | 45.6 | 58.6 | 81.8 | 43.5 | 56.8 | 78.3 |
| | DIST | 48.5 | 56.5 | 86.1 | 45.2 | 58.0 | 89.4 |
| | DIST-aug | 48.0 | 59.7 | 85.5 | 60.0 | 70.6 | 89.4 |
| | MU | 58.5 | 70.0 | 89.7 | 84.2 | 86.8 | 95.0 |
| | DU | 53.9 | 64.2 | 84.6 | 81.9 | 84.5 | 92.8 |
| | ADDMU | 64.9 | 74.2 | 87.7 | 85.5 | 88.3 | 96.8 |
Table 16: Detection performance of regular and FB adversarial examples (*-FB) against RoBERTa on SST-2 and AGNews. Our proposed ADDMU outperforms other methods. The best performance is bolded. Results are averaged over three runs with different random seeds.
# A Distributional Lens for Multi-Aspect Controllable Text Generation

Yuxuan Gu†, Xiaocheng Feng†‡, Sicheng Ma†, Lingyuan Zhang†, Heng Gong†, Bing Qin†‡

†Harbin Institute of Technology ‡Peng Cheng Laboratory

{yxgu,xcfeng,scma,lyzhang,hgong,bqin}@ir.hit.edu.cn
# Abstract

Multi-aspect controllable text generation is a more challenging and practical task than single-aspect control. Existing methods achieve complex multi-aspect control by fusing multiple controllers learned for single aspects, but suffer from attribute degeneration caused by the mutual interference of these controllers. To address this, we provide observations on attribute fusion from a distributional perspective and propose to directly search for the intersection areas of multiple attribute distributions as their combination for generation. Our method first estimates the attribute space with an autoencoder structure. Afterward, we iteratively approach the intersections by jointly minimizing distances to points representing different attributes. Finally, we map them to attribute-relevant sentences with a prefix-tuning-based decoder. Experiments on the three-aspect control task, covering sentiment, topic, and detoxification, reveal that our method outperforms several strong baselines on attribute relevance and text quality and achieves the SOTA. Further analysis also supplies some explanatory support for the effectiveness of our approach$^1$.

# 1 Introduction

Controllable text generation is a challenging task in natural language generation, which aims to generate fluent text with desired attributes. Pilot studies attempt single-aspect control by directly fine-tuning a conditional model (Ziegler et al., 2019; Keskar et al., 2019), or turn to methods with language models fixed (Dathathri et al., 2020) due to the high cost of large-scale pre-trained language models (Brown et al., 2020a; Zhang et al., 2022).

![](images/ca5c0239497c1f51a7b442671d167f70e8ccd07ae1287f189bd679e4cc3f9bac.jpg)
Figure 1: Probability space of attributes. The orange background denotes the estimated distribution over natural language. Blue and green areas represent distributions over sentences containing attributes from two different aspects, respectively. A darker region means a higher probability in the space. The shaded areas are distributional centers, the regions with the highest probability density.

![](images/ba7cbf224680893e48af53bb0de9f9aa03f41105447c398aa2840e04a412eb1b.jpg)

Recent works focus on a more practical setting, multi-aspect$^2$ controllable text generation, with existing approaches mainly divided into three technical routes: weighted decoding (Dathathri et al., 2020; Krause et al., 2021), multi-objective optimization (Kumar et al., 2021; Mireshghallah et al., 2022), and prefix-tuning (Qian et al., 2022). These explore ways to combine controllers learned for single aspects and apply them to a fixed language model, yet suffer from attribute degeneration caused by the mutual interference of the controllers.

We provide a distributional perspective to observe and alleviate this problem. In the current text generation paradigm, a language model forms an estimated distribution over sentences, with training data amounting to samples from the natural language distribution (Pillutla et al., 2021). For single-aspect control, these methods train a classifier or a prefix for each attribute independently, which can be regarded as appraising a center of the distribution over attribute-relevant sentences, before biasing the language model's distribution toward this center. Correspondingly, when generalizing to multi-aspect control, their fusion strategy directly obtains an interpolation or average of these centers, which may be too straightforward. As shown in Figure 1, the interpolation point denotes the position acquired after combining multiple centers in the probability space, and the intersection represents where oracle sentences that simultaneously satisfy multiple attributes lie. In the left part of Figure 1, when the distributions of attributes are symmetric$^3$, the interpolation point is indeed within the intersection area.
However, there could be a mismatch between the interpolation point and the intersection. For example, as illustrated in the right part of Figure 1, two skewed distributions intersect on their tails, leaving the interpolation point outside the intersection area and thus making it unable to express all desired attributes together.

In this paper, different from approximating the intersection area with the interpolation point, we propose a strategy for directly acquiring the intersection. We first deploy an autoencoder structure to map attribute-relevant sentences to latent representations constituting an estimated attribute space. With our specially designed constraints, this space can model relationships among attributes. Afterward, we provide an effective intersection-searching algorithm that can walk around the long-tail regions in the distributions of all desired attributes and iteratively find where they combine more tightly. Finally, we utilize a prefix-tuning-based decoder to construct sentences from the searched intersection.

We experiment on three-aspect control with two attributes from the sentiment aspect, four from the topic aspect, and one from detoxification, using the IMDb movie reviews (Maas et al., 2011), AGNews (Zhang et al., 2015), and Jigsaw Toxic Comment Classification Challenge datasets, respectively. We evaluate the relevance of each attribute independently and report their average as the final relevance metric. Besides, we assess text quality with perplexity and distinctness, concerning fluency and diversity. Results show that our method significantly outperforms strong baseline models on multi-aspect control. Furthermore, our analytical experiments show that our intuitive assumptions fit well with our observations. The main contributions are as follows:

- We propose a distributional perspective that models multi-aspect control more practically.
- We provide a method that directly searches for intersections in the attribute space and generates sentences with the desired attributes.
- We experimentally demonstrate the effectiveness of our method on multi-aspect control against strong baselines, achieving state-of-the-art results.

# 2 Related Work

Variational autoencoders were often used for controllable text generation in early work (Hu et al., 2017; Duan et al., 2020; Mai et al., 2020), where much effort was devoted to improving text fluency. The prosperity of large-scale pre-trained language models (Radford et al., 2019) has opened further directions for attribute control, such as fine-tuning (Ficler and Goldberg, 2017; Ziegler et al., 2019; Keskar et al., 2019). Recent work has made gratifying progress on single-aspect control (Krause et al., 2021), leading studies to gradually turn to the more difficult task of multi-aspect control, with the following three main approaches.

Weighted Decoding As the scale of language models increases rapidly, weighted decoding (Dathathri et al., 2020; Krause et al., 2021; Yang and Klein, 2021; Liu et al., 2021a; Gu et al., 2022) becomes a simple and practical choice. This framework uses Bayes' rule to decompose the probability of sentences conditioned on attributes into a language model and a classifier, directly at decoding time. For multi-aspect control, it can easily be generalized by interpolating classifiers (Lin and Riedl, 2021).

Multi-Objective Optimization Controllable text generation is naturally a multi-objective optimization problem when its decoding process is regarded as an optimization objective. Some approaches, such as DGC (Khalifa et al., 2020), Mix&Match (Mireshghallah et al., 2022), and COLD Decoding (Qin et al., 2022), adopt energy-based models (LeCun et al., 2006) to blend multiple objectives.
Others, like MUCOCO (Kumar et al., 2021), convert the optimization objectives of multi-aspect control into inequality constraints and apply the Lagrange multiplier method to the resulting constrained optimization problem.

Prefix-Tuning GPT-3 (Brown et al., 2020b) introduced a new paradigm, prompt-based learning (Liu et al., 2021b), which enables few-shot learning on downstream tasks. Prefix-Tuning (Li and Liang, 2021) leverages learned lightweight prompts to trigger the conditional generation capability of the language model. Applying prefix-tuning to multi-aspect controllable text generation (Yu et al., 2021; Qian et al., 2022; Carlsson et al., 2022; Yang et al., 2022) can be regarded as implicitly performing multi-objective optimization.

![](images/8b31c47317128126b577135d8a0a67474538799a97eb7b56d58ab9dfe63f4d6a.jpg)
Figure 2: An overview of our method. Top: Illustration of our autoencoder structure with prefix-tuning deployed on the fixed decoder, where latent representations $\mathcal{H}_i$ constitute an estimated attribute space. Bottom Left: Illustration of the attribute classification loss $\mathcal{L}_C$ and the aspect gap loss $\mathcal{L}_G$ attached to the attribute space. Bottom Right: Inference stage, with the prefix mapped from the intersection of attributes.

# 3 Methodology

In this section, we first introduce the motivation and overall process of our method, and then describe each module in detail.

# 3.1 Overview

As illustrated in Figure 2, our method revolves around the attribute space: estimating it, searching for intersections in it, and mapping intersections to sentences.

Firstly, we aim to construct an attribute space from sampled sentences that estimates the real space as accurately as possible. We employ an autoencoder structure whose latent representations denote points that constitute our estimated attribute space.
To ensure that the estimated space reliably models the attributes, including their probability distributions and the relationships between different attributes, we attach three constraints to the representations. (I) The Reconstruction Loss $\mathcal{L}_R$ bridges the gap between points in the attribute space and natural attribute-relevant sentences, i.e., it recovers the attributes reflected in the content. (II) The Attribute Classification Loss $\mathcal{L}_C$ forces the encoder to focus on capturing attributes by distinguishing points of different attributes from the same aspect. (III) The Aspect Gap Loss $\mathcal{L}_G$ penalizes the discrepancy between aspects caused by the domain gap among the different data sources. Inspired by feature alignment (Pan et al., 2010), we minimize the distances between the distributional centers of every two aspects.

The second step searches for an intersection area of the desired attributes. If such an area exists, then for any point inside it, the neighboring points in a tiny surrounding region should cover all the required attributes. Inspired by this neighborhood idea, we design an algorithm that iteratively approaches an area where these attributes bind more tightly. The third step maps the searched intersection to a prefix that activates the language model to generate attribute-relevant sentences. To make the language model less sensitive to slight variations, we sample a perturbation vector from a multivariate Gaussian distribution.
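The perturb-then-map step of the third stage can be sketched as follows. This is a minimal NumPy stand-in: the dimensions, the tanh nonlinearity, and the random weights are illustrative assumptions, and the real $\mathrm{MLP}_\theta$ is trained jointly with the encoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: latent dim of H, hidden dim, and flattened prefix dim.
D_LATENT, D_HIDDEN, D_PREFIX = 64, 128, 256

# A one-hidden-layer MLP standing in for MLP_theta (weights random here).
W1 = rng.standard_normal((D_LATENT, D_HIDDEN)) * 0.02
W2 = rng.standard_normal((D_HIDDEN, D_PREFIX)) * 0.02

def prefix_from_latent(h, lam=0.1):
    """Map a latent point H (plus Gaussian perturbation) to a prefix vector."""
    eps = rng.standard_normal(h.shape)      # eps ~ N(0, I)
    hidden = np.tanh((h + lam * eps) @ W1)  # perturb, then project
    return hidden @ W2                      # flattened prefix activations

h = rng.standard_normal(D_LATENT)           # a point in the attribute space
prefix = prefix_from_latent(h)
print(prefix.shape)  # (256,)
```

Sampling a fresh perturbation per call mimics the robustness trick described above: nearby latent points should map to prefixes that trigger similar generations.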
We have $I^{t} = \bigcup_{\tau = 1}^{|A_{t}|}I_{\tau}^{t}$ and $I = \bigcup_{t = 1}^{|\mathbf{A}|}I^{t}$, where $I^{t}$ is the set of indices of all sentences with any attribute in aspect $A_{t}$, and $I$ indexes the entire training data. We encode sentences $\{X_i\}$ from all aspects $\mathbf{A}$ into representations $\{\mathcal{H}_i\}$ with unified mapping parameters $\phi$: $\mathcal{H}_i = \operatorname{Encode}_{\phi}(X_i)$, where $i \in I$.

Reconstruction Loss $\mathcal{L}_R$ As in the top of Figure 2, $\mathcal{L}_R$ is computed in the same way as the autoregressive loss of the pre-trained language model $p_{\mathrm{LM}}$:

$$
\mathcal{L}_R = -\sum_{i \in I} \log p_{\mathrm{LM}}\left(X_i \mid \mathrm{Prefix}_i\right), \tag{1}
$$

$$
\mathrm{Prefix}_i = \operatorname{MLP}_{\theta}\left(\mathcal{H}_i + \lambda \varepsilon_i\right), \quad \varepsilon_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I}),
$$

where $X_i$ is a sample sentence from the entire training set, i.e., $i \in I$, and $\varepsilon_i$, with scaling factor $\lambda$, is a perturbation vector sampled from a multivariate Gaussian distribution $\mathcal{N}(\mathbf{0}, \mathbf{I})$ for robustness during reconstruction. The multilayer perceptron $\operatorname{MLP}_{\theta}$ maps the perturbed $\mathcal{H}_i$ to a $\mathrm{Prefix}_i$ that can activate the language model to generate text with the desired attributes. It is worth noting that our primary goal is to recover attributes, which means $\mathcal{L}_R$ does not need to, and preferably should not, converge too tightly, as long as text fluency is maintained.

Attribute Classification Loss $\mathcal{L}_C$ We force the encoder to focus on attributes through $\mathcal{L}_C$:

$$
\mathcal{L}_C = -\sum_{t = 1}^{|\mathbf{A}|} \sum_{\tau = 1}^{|A_t|} \sum_{i \in I_{\tau}^{t}} \log p_{\pi_t}\left(a_{\tau}^{t} \mid \mathcal{H}_i\right). \tag{2}
$$

Given a sentence representation $\mathcal{H}_i$, $p_{\pi_t}$ is a classifier with parameters $\pi_t$ that distinguishes the attributes $\{a_{\tau}^{t}\}$ within aspect $A_t$.

Aspect Gap Loss $\mathcal{L}_G$ We penalize the discrepancy between distributional centers by:

$$
\mathcal{L}_G = \sum_{1 \leq t_1 < t_2 \leq |\mathbf{A}|} \left\| \sum_{i \in I^{t_1}} \frac{\mathcal{H}_i}{|I^{t_1}|} - \sum_{j \in I^{t_2}} \frac{\mathcal{H}_j}{|I^{t_2}|} \right\|_2, \tag{3}
$$

which sums the Euclidean distances between every two distinct distributional centers. When generalizing to a larger number of aspects, it is relatively expensive to compute averages over the entire dataset each time the model is updated, so in practice we calculate this loss with a batch-level approximation. We assign each aspect a memory unit that stores the latest estimate of that aspect's distributional center. Each time a batch of sentences from one aspect is processed, we take the average of their representations as the center and sum the Euclidean distances to the centers of the other aspects stored in memory, which gives the estimated $\mathcal{L}_G$. We then update this aspect's memory unit with the new center.

During the training stage, our loss function is:

$$
\mathcal{L} = w_1 \mathcal{L}_R + w_2 \mathcal{L}_C + w_3 \mathcal{L}_G. \tag{4}
$$

It is worth noting that we only update the parameters $\phi$, $\theta$, and $\{\pi_t\}$ of the encoder, the MLP layer, and the classifier heads, respectively.
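The batch-level approximation of $\mathcal{L}_G$ can be sketched as follows. This is a NumPy mock-up with hypothetical aspect names and dimensions; in training, the distances would be computed with differentiable tensor operations rather than NumPy calls:

```python
import numpy as np

# Each aspect keeps a memory unit holding its latest estimated center.
memory = {}  # aspect name -> stored center vector

def aspect_gap_step(aspect, batch_reprs):
    """Process one batch of latent representations from a single aspect.

    Returns the estimated L_G contribution: the sum of Euclidean distances
    from this batch's center to the stored centers of all other aspects.
    Afterwards, the aspect's memory unit is refreshed with the new center.
    """
    center = batch_reprs.mean(axis=0)
    loss = sum(np.linalg.norm(center - other)
               for name, other in memory.items() if name != aspect)
    memory[aspect] = center  # update the memory unit to the latest center
    return loss

rng = np.random.default_rng(0)
# First batches seed the memory; later batches yield non-zero gap estimates.
aspect_gap_step("sentiment", rng.standard_normal((8, 16)))
gap = aspect_gap_step("topic", rng.standard_normal((8, 16)) + 5.0)
print(gap > 0)  # True: the two aspect centers are far apart
```

A larger batch makes the per-batch center a better estimate of the true aspect center, which is why the Limitations section notes that small batches degrade this approximation.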
# 3.3 Intersection of Attributes

Suppose there is an intersection point, denoted as $\tilde{\mathcal{H}}^*$, located within the intersection region of attributes $\left\{a_{\alpha_1}^1, a_{\alpha_2}^2, \dots, a_{\alpha_N}^N\right\}$ from $N$ different aspects, where $a_{\alpha_t}^t$ is the $\alpha_t$-th attribute in aspect $A_t$.

Algorithm 1 Intersection Searching
Input: $\mathcal{H}_i, i \in \bigcup_{t=1}^N I_{\alpha_t}^t$ from the $N$ attributes; $\omega_{\alpha_t}$: weight of each attribute
Output: intersection of the $N$ attributes: $\tilde{\mathcal{H}}^*$
1: Initialize $M$ candidates: $\{\tilde{\mathcal{H}}_m^0\}$
2: for $s$ in $[0, S-1]$ do   ▹ iterate $S$ times
3:   for $m$ in $[1, M]$ do
4:     $\tilde{\mathcal{H}}_m^{s+1} \gets 0$
5:     for $t$ in $[1, N]$ do
6:       $\mathbf{H} \gets \text{Nearest}(\tilde{\mathcal{H}}_m^s, \{\mathcal{H}_i, i \in I_{\alpha_t}^t\})$
7:       $\tilde{\mathcal{H}}_m^{s+1} \gets \tilde{\mathcal{H}}_m^{s+1} + \omega_{\alpha_t} \cdot \text{mean}(\mathbf{H})$
8:     end for
9:     $\tilde{\mathcal{H}}_m^{s+1} \gets \tilde{\mathcal{H}}_m^{s+1} / \sum_{t=1}^N \omega_{\alpha_t}$
10:   end for
11: end for
12: $\tilde{\mathcal{H}}^* \gets \text{Select}(\{\tilde{\mathcal{H}}_m^S\})$

Algorithm 1 approximates $\tilde{\mathcal{H}}^*$ by iteratively approaching the most balanced point among nearest neighbors from the different attributes. First, we initialize the candidates $\{\tilde{\mathcal{H}}_m^0\}$ by randomly sampling points in the attribute space, calculating their distance to the closest point of each attribute $a_{\alpha_t}^t$, and selecting the top $M$ samples with the smallest average distance to all attributes. At each iteration $s$, we choose the top-$K$⁴ nearest points to $\tilde{\mathcal{H}}_m^s$ for each attribute and update $\tilde{\mathcal{H}}_m^{s+1}$ with the weighted average of these points.
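The search loop above can be sketched in NumPy as follows. The toy 2-D clusters and hyperparameters are illustrative, and `search_intersection` is our name for the routine, not the paper's; the real algorithm runs in the learned latent space:

```python
import numpy as np

def search_intersection(attr_points, weights, n_candidates=4, k=2, n_iters=10,
                        rng=np.random.default_rng(0)):
    """Sketch of Algorithm 1: iterative nearest-neighbour intersection search.

    attr_points: list of (n_t, d) arrays, one per desired attribute.
    weights:     per-attribute weights omega.
    """
    all_pts = np.vstack(attr_points)
    # Initialize candidates: random samples, keeping those with the smallest
    # average distance to the closest point of each attribute.
    seeds = all_pts[rng.choice(len(all_pts), size=4 * n_candidates)]
    avg_d = [np.mean([np.min(np.linalg.norm(P - s, axis=1)) for P in attr_points])
             for s in seeds]
    cands = seeds[np.argsort(avg_d)[:n_candidates]]

    for _ in range(n_iters):
        new = []
        for c in cands:
            acc = np.zeros_like(c)
            for P, w in zip(attr_points, weights):
                idx = np.argsort(np.linalg.norm(P - c, axis=1))[:k]  # top-K nearest
                acc += w * P[idx].mean(axis=0)                       # weighted mean
            new.append(acc / sum(weights))
        cands = np.array(new)
    # Select the candidate closest, on average, to all attributes.
    scores = [np.mean([np.min(np.linalg.norm(P - c, axis=1)) for P in attr_points])
              for c in cands]
    return cands[int(np.argmin(scores))]

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(100, 2))   # attribute 1: cluster near (0, 0)
b = rng.normal(3.0, 1.0, size=(100, 2))   # attribute 2: cluster near (3, 3)
h_star = search_intersection([a, b], weights=[1.0, 1.0])
print(h_star.shape)  # (2,)
```

With balanced weights the candidates settle in the region where the two clusters' inner tails meet, rather than at the midpoint of their centers.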
It is worth mentioning that $\omega_{\alpha_t}$ is a weight used to balance the attributes or to favor some of them specifically; a negative $\omega_{\alpha_t}$ can even push the search away from a particular attribute. Finally, we select the best candidate from the last iteration $S$, which is expected to lie in the intersection region, i.e., to be a representation related to multiple attributes.

# 3.4 Generation with Intersections

As illustrated in the bottom right of Figure 2, we convert the representation $\tilde{\mathcal{H}}^*$ obtained from the intersection area directly into the prefix with $\operatorname{MLP}_{\theta}$ and let the language model generate a multi-attributed sentence $Y$ from input $\mathcal{X}$:

$$
\begin{aligned} Y &= \arg\max_{y} p_{\mathrm{LM}}\left(y \mid \mathrm{Prefix}^{*}; \mathcal{X}\right), \\ \mathrm{Prefix}^{*} &= \operatorname{MLP}_{\theta}\left(\tilde{\mathcal{H}}^{*} + \lambda \varepsilon_i\right), \quad \varepsilon_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I}). \end{aligned} \tag{5}
$$

When generating several attribute-relevant sentences for one attribute combination, we only need to calculate its intersection once.

# 4 Experiment

In this section, we demonstrate the effectiveness of our method on three-aspect control, covering sentiment, topic, and detoxification.

# 4.1 Multi-Aspect Control Task

The datasets we use are the same as for GeDi (Krause et al., 2021) and Contrastive Prefix (Qian et al., 2022). To balance the data scale across aspects, we randomly sample 10k sentences from each dataset (fewer than GeDi uses), with each attribute receiving an equal share of this amount. We use the IMDb movie reviews (Maas et al., 2011), the AGNews dataset (Zhang et al., 2015), and the Jigsaw Toxic Comment Classification Challenge Dataset for the sentiment, topic, and detoxification aspects, respectively.
The prompts used for text generation are the same as those used in PPLM (Dathathri et al., 2020), with 20 from its bag-of-words experiments and 15 from its discriminator experiments. We experiment with the 8 combinations of the 3 aspects, namely 2 sentiments × 4 topics × 1 detoxification, and generate 5 completions for each combination and each prompt. In total, each model generates $35 \times 2 \times 4 \times 1 \times 5 = 1400$ sentences. It is worth noting that we do not specifically use prompts that induce the language model to generate toxic text, which makes detoxification easier to improve.

To measure performance on the different aspects, we compute attribute relevance. For the sentiment aspect we finetune a DeBERTa (He et al., 2021b,a) classifier on the Yelp dataset (Zhang et al., 2015), and for the topic aspect we finetune a classifier on all the AGNews data not used during training. We evaluate non-toxicity with the Google Perspective API⁶. The final performance of a model is the average of these three attribute relevance scores. We also use two auxiliary metrics to measure text quality. One is perplexity, calculated with GPT2-large following Contrastive Prefix (Qian et al., 2022). To check that models are not insensitive to changes in prefixes, we calculate the Distinctness (Li et al., 2016) of sentences generated from different prefixes and, for simplicity, average the 1-gram, 2-gram, and 3-gram distinct scores. Moreover, we conduct a human evaluation in which sentences generated by the different models are shuffled. Each sentence is rated by three professional evaluators on the three attribute relevances and on text fluency, on a scale of 1 to 5, with 5 representing text highly related to the desired attribute or very fluent.

# 4.2 Baselines

(I) Weighted Decoding: PPLM (Dathathri et al., 2020) biases the language model with gradients back-propagated from trained classifiers.
GeDi (Krause et al., 2021) influences the decoding process with token probabilities conditioned on attributes. (II) Multi-Objective Optimization: MUCOCO (Kumar et al., 2021) regards the decoding process as a constrained optimization problem, where the language model is the objective function and the attributes are constraints. Mix&Match (Mireshghallah et al., 2022) controls attributes with energy-based models and generates sentences by masking, sampling, and correcting. (III) Prefix-Tuning: Contrastive Prefix (Qian et al., 2022) uses prefixes to activate the language model to generate attribute-relevant sentences, by concatenation or semi-supervision.

# 4.3 Results

The automatic evaluation results under the multi-aspect setting are shown in Table 1, where models are grouped by method type in chronological order. We also report standard deviations, which reflect each model's stability across different attribute combinations.

| Methods | Average↑ (%) | Sentiment↑ (%) | Topic↑ (%) | Detoxification↑ (%) | PPL.↓ | Dist.↑ |
| --- | --- | --- | --- | --- | --- | --- |
| *Weighted Decoding Based Methods* | | | | | | |
| PPLM | 71.0 ± 21.4 | 64.7 ± 24.8 | 63.5 ± 22.7 | 84.9 ± 6.5 | 62.6 | 62.0 |
| GeDi | 81.4 ± 14.7 | 76.1 ± 17.2 | 73.8 ± 11.3 | 94.2 ± 1.9 | 116.6 | 75.1 |
| *Multi-Objective Optimization Based Methods* | | | | | | |
| MUCOCO | 73.9 ± 24.1 | 65.0 ± 33.7 | 67.2 ± 18.3 | 89.5 ± 3.5 | 405.6 | 49.7 |
| Mix&Match | 79.7 ± 21.8 | 73.5 ± 25.9 | 69.9 ± 21.1 | 95.8 ± 1.9 | 63.0 | 61.8 |
| *Prefix-Tuning Based Methods* | | | | | | |
| Contrastive Prefix (concatenation) | 77.2 ± 18.5 | 67.3 ± 20.7 | 71.8 ± 16.5 | 92.6 ± 2.9 | 54.6 | 39.9 |
| Contrastive Prefix (semi-supervised) | 81.3 ± 16.5 | 74.4 ± 19.6 | 76.9 ± 16.7 | 92.7 ± 3.5 | 31.9 | 43.3 |
| Ours | 87.4 ± 10.9 | 86.7 ± 10.5 | 84.8 ± 14.2 | 90.7 ± 7.4 | 28.4 | 49.5 |
| w/o $\mathcal{L}_G$ | 80.9 ± 16.2 | 71.6 ± 11.7 | 75.9 ± 18.9 | 95.3 ± 2.6 | 71.5 | 58.9 |
| w/o $\mathcal{L}_C$ | 62.3 ± 41.8 | 49.1 ± 49.8 | 41.7 ± 36.0 | 96.0 ± 0.1 | 473.0 | 37.0 |

For weighted decoding, GeDi uses more powerful classifiers than PPLM and performs better on attribute relevance, stability across combinations, and distinctness, but correspondingly worse on perplexity. The multi-objective optimization methods achieve favorable attribute relevance, although MUCOCO's perplexity explodes because its non-autoregressive paradigm is ill-suited to generating from scratch. The semi-supervised Contrastive Prefix performs similarly to GeDi, except for a lack of diversity.

Our method performs best on the average attribute-relevance metric, with a significant improvement of at least 7.3% over existing baselines. Our gains mainly come from the sentiment and topic aspects, at no less than 13.9% and 10.3%, respectively. Although our model is not the best on detoxification, it is the most balanced and stable, with the lowest average standard deviation, 10.9. As a prefix-tuning-based method that induces the language model without modifying it directly, and is thus naturally strong on text fluency, it performs well on perplexity and inherits the base model's diversity.

Furthermore, we ablate the aspect gap loss $\mathcal{L}_G$ and the attribute classification loss $\mathcal{L}_C$ separately. On the one hand, without $\mathcal{L}_G$ we cannot alleviate the bias among the different training datasets, making it hard to search for intersection areas. Since the training sentences for the sentiment and topic aspects are mainly non-toxic, the model focuses more on detoxification rather than struggling for the other two aspects, leading to considerable declines in their relevance and a slight improvement in detoxification. Moreover, as the distance between sample points from different aspects in the attribute space increases, the model generates sentences mapped from far sparser areas, causing a small decrease in fluency and a subtle increase in diversity. On the other hand, without $\mathcal{L}_C$ the attribute space collapses entirely. The relevance of sentiment and topic drops drastically while non-toxicity rises, because the model can hardly distinguish representations of different attributes within the same aspect and falls back on the relatively easier detoxification. Worse still, without distinct representations, the model has to recover different sentences from similar representations, leading to oscillation during training and to incomplete text at inference time.

Results of the human evaluation are in Table 2, with an inter-annotator agreement of 0.36 in Fleiss' $\kappa$. We evaluate GeDi, Contrastive Prefix, and our method, and observe that the results are consistent with the automatic ones on sentiment and topic relevance.
The performance of the models on detoxification is high and relatively similar, which makes the automatic results differ from the manual ones, where the annotators judge that our model does a better job than the baselines. Since perplexity is relatively unreliable, the manually measured fluency of GeDi is much better than that of Contrastive Prefix, and our method achieves the best fluency.

Table 1: Automatic Results on Multi-Aspect Control. Hyperparameters and details are in §B.

| Methods | Sent.↑ | Topic↑ | Detox.↑ | Fluency↑ |
| --- | --- | --- | --- | --- |
| GeDi | 2.96 | 2.72 | 4.59 | 3.08 |
| Con. Prefix | 2.84 | 2.90 | 4.40 | 2.26 |
| Ours | 3.47 | 3.39 | 4.71 | 3.69 |

Table 2: Human Evaluation on Multi-Aspect Control.

# 5 Analysis

# 5.1 Effect of Different Attributes and their Combinations

We illustrate the detailed results for each attribute and their combinations in Table 3. GeDi and Prefix-tuning perform differently in single-aspect control, each with its own advantages. For example, GeDi excels at negative, with 93.9% relevance, while Prefix-tuning is good at positive, with 90.6% relevance. When dealing with multi-aspect control, they inherit these imbalanced characteristics, with average relevances of 91.1% and 79.1%, respectively. In addition, the baselines' average relevance for each attribute decreases compared to single-aspect control, by 0.7 to 33.0 points. On average, our model outperforms the other baselines on the attribute metrics (Table 1). In detail, our model performs competitively on most attributes compared with the other prefix-tuning-based model, Contrastive Prefix. In particular, on attributes like business and sci/tech, our model improves significantly over that method in multi-aspect control and can even surpass it under single-aspect control.

| Methods | Neg. | Pos. | World | Sports | Business | Sci./Tech. | Detox. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| *Weighted Decoding Based Methods* | | | | | | | |
| GeDi single-aspect | 93.9 | 70.7 | 73.4 | 85.7 | 75.7 | 98.0 | 94.9 |
| GeDi | 94.7 | - | 80.0 | - | - | - | 90.6 |
| | 84.2 | - | - | 74.8 | - | - | 93.9 |
| | 94.9 | - | - | - | 75.7 | - | 96.6 |
| | 90.6 | - | - | - | - | 80.1 | 92.8 |
| | - | 53.7 | 61.4 | - | - | - | 94.4 |
| | - | 60.5 | - | 74.3 | - | - | 95.2 |
| | - | 57.6 | - | - | 54.3 | - | 95.7 |
| | - | 72.3 | - | - | - | 90.2 | 94.2 |
| average | 91.1 (-2.8) | 61.0 (-9.7) | 70.7 (-2.7) | 74.6 (-11.1) | 65.0 (-10.7) | 85.2 (-12.8) | 94.2 (-0.7) |
| *Prefix-Tuning Based Methods* | | | | | | | |
| Prefix single-aspect | 88.4 | 90.6 | 74.5 | 85.3 | 93.5 | 93.6 | 93.8 |
| Contrastive Prefix semi-supervised | 65.5 | - | 80.6 | - | - | - | 91.8 |
| | 67.2 | - | - | 90.3 | - | - | 92.5 |
| | 56.0 | - | - | - | 79.2 | - | 92.2 |
| | 90.0 | - | - | - | - | 93.3 | 84.8 |
| | - | 93.5 | 64.8 | - | - | - | 95.1 |
| | - | 41.8 | - | 78.5 | - | - | 94.8 |
| | - | 87.4 | - | - | 41.7 | - | 95.2 |
| | - | 93.6 | - | - | - | 86.7 | 95.3 |
| average | 69.7 (-18.7) | 79.1 (-11.5) | 72.7 (-1.8) | 84.4 (-0.9) | 60.5 (-33.0) | 90.0 (-3.6) | 92.7 (-1.1) |
| Ours | 69.7 | - | 71.7 | - | - | - | 84.1 |
| | 78.6 | - | - | 80.0 | - | - | 80.2 |
| | 99.9 | - | - | - | 96.7 | - | 96.8 |
| | 92.8 | - | - | - | - | 98.0 | 81.7 |
| | - | 80.5 | 58.0 | - | - | - | 95.1 |
| | - | 84.7 | - | 86.6 | - | - | 94.5 |
| | - | 87.6 | - | - | 91.7 | - | 98.1 |
| | - | 99.7 | - | - | - | 96.1 | 95.4 |
| average | 85.3 (-3.1) | 88.1 (-2.5) | 64.9 (-9.6) | 83.3 (-2.0) | 94.2 (+0.7) | 96.8 (+3.2) | 90.7 (-3.1) |

Table 3: Detailed Results on Single-Aspect and Multi-Aspect Control (Neg./Pos.: sentiment relevance in %; World through Sci./Tech.: topic relevance in %; Detox.: non-toxicity in %). We show single-aspect results and average multi-aspect results with their difference from single-aspect; the other rows each represent an attribute combination. Cases are in §C. Detailed results for other baseline models and our ablations are in §D.

In addition, the correlations between attributes vary widely, as Table 3 shows. For example, positive generally fits well with non-toxic, while negative leads to a massive drop in non-toxicity, consistent with the intuition that one can hardly praise people and offend them simultaneously. Moreover, world and business news often report negative events such as war, famine, and inflation, making these topics challenging to combine with positive. When attributes are not closely correlated, meaning that few natural sentences possess them together, our method is more likely to capture such rare co-occurrences and magnify their frequency. Take business as an example. It is effortless to achieve good attribute relevance in single-aspect control on business, with GeDi achieving 75.7 and Prefix obtaining 93.5. After attaching positive to business, the baseline models suffer a decline due to the weak correlation, with GeDi and Contrastive Prefix dropping to 54.3 and 41.7, respectively. In contrast, our method alleviates this problem by retrieving this unusual co-occurrence among the training sentences and recovering it from the attribute space, achieving 91.7, close to single-aspect control. When combining business with negative, a relatively common combination, there is still some decrease for the baseline models.

![](images/379b6df3484ab045adf0a5946bb8645f205649d6d7703fb22fe8b5c16ec39ac7.jpg)
Figure 3: Projection of 4 attributes from the attribute space.
On the contrary, our method can even reach 96.7, surpassing single-aspect control.

# 5.2 Estimated Attribute Space

We demonstrate part of our estimated attribute space in Figure 3, with four attributes: positive, negative, sports, and sci/tech from the sentiment and topic aspects. We project the high-dimensional space to 2D with Principal Component Analysis (PCA). Consistent with our hypothesis, the distributions of sports and sci/tech are asymmetric, and the intersections lie on the sparse edges of the attributes' distributions. In addition, we project the intersections found by the baseline's strategy and by ours, respectively. For the positive-sci/tech and negative-sci/tech pairs, the combinations are relatively tight, making it easy to find intersections. However, the intersection areas for the positive-sports and negative-sports pairs are considerably sparse. As shown in the enlarged area, the intersection found by the baseline sits at the midpoint of the two distributional centers, but this location is not where the attributes actually intersect. Our method, in contrast, can find an intersection in such a sparse region, so that points from both attributes appear simultaneously in its tiny surrounding area. It is worth noting that positive and negative appear to intersect in this projection only because they are close in the high-dimensional space; there is actually no intersection when these two attributes alone are projected, as shown in §A.3.

# 5.3 Effect of $K$

We analyze the effect of varying $K$ in the intersection searching algorithm and demonstrate the results in Table 4.

| $K$ | Avg.↑ | Sent.↑ | Topic↑ | Detox.↑ |
| --- | --- | --- | --- | --- |
| 5000 | 75.5 | 70.5 | 67.9 | 88.2 |
| 4000 | 77.6 | 72.9 | 71.4 | 88.4 |
| 3000 | 78.7 | 72.4 | 74.7 | 88.9 |
| 2000 | 79.1 | 72.6 | 75.9 | 88.7 |
| 1500 | 79.9 | 73.6 | 77.1 | 89.0 |
| 1000 | 80.7 | 75.7 | 77.2 | 89.1 |
| 800 | 82.9 | 79.3 | 79.2 | 90.3 |
| 500 | 85.2 | 83.5 | 81.5 | 90.5 |
| 300 | 85.7 | 84.1 | 83.2 | 89.7 |
| 200 | 87.4 | 86.7 | 84.8 | 90.7 |
| 150 | 84.0 | 79.2 | 84.3 | 88.4 |
| 100 | 83.9 | 78.7 | 83.6 | 89.5 |
| 50 | 82.2 | 78.4 | 78.5 | 89.6 |
| 20 | 80.9 | 77.8 | 73.1 | 91.7 |
| 10 | 80.8 | 79.6 | 71.5 | 91.2 |
| 5 | 81.4 | 82.9 | 69.3 | 92.1 |
| 3 | 85.0 | 86.1 | 77.7 | 91.1 |
| 1 | 78.8 | 63.1 | 80.9 | 92.4 |

Table 4: Results that vary with $K$.

Our model reaches a critical point at $K = 200$, where performance is optimal. On the one hand, as the value of $K$ increases, our method pays less attention to regions where samples are fewer but attributes combine more tightly, and performance decreases accordingly. When $K$ reaches 5k, our method degenerates into a plain prefix-tuning model that treats the intersection as the midpoint of the distributional centers; its performance is similar and slightly inferior to the concatenation version of Contrastive Prefix in Table 1.

![](images/5cd9f249c47b257500a697bc7ed6e79e97e644d9dbf6cd33089590ea8fa0c8ed.jpg)
Figure 4: Distribution of attribute World from Topic.
On the other hand, a smaller $K$ leads to suboptimal performance, since the effect of noise in the training data becomes non-negligible. When $K$ is less than 10, our model becomes very unstable.

# 5.4 Distribution of Attributes

We project the sample points to 2D by PCA, with each attribute projected independently. As in Figure 4, we display a scatterplot of World and perform a Gaussian kernel density estimation to visualize its probability distribution. A darker area denotes a higher probability, where more representation points of oracle sentences gather; the region annotated by a red ellipse is the estimated distributional center. As the plot shows, the distribution of World is significantly asymmetric: the center lies in the top part, while the bottom is a sparse long tail. In addition, the distribution is even nonconvex, with an isolated cluster in the lower right corner. This observation supports our hypothesis that the practical distributions of attributes are far more complex than symmetric distributions such as the Gaussian. We plot the distributions of the other attributes in §A.1.

# 6 Discussion on Distributional Lens

Pilot work such as DGC (Khalifa et al., 2020) estimates the language distribution with an energy-based model and optimizes this distribution to satisfy constraints by approaching the constraint manifold. Recent distributional approaches like COLD Decoding (Qin et al., 2022) and MuCoLa (Kumar et al., 2022) place the language and attribute distributions in the same space so as to sample attribute-related sentences with Langevin dynamics. Concurrent work on the image side, PromptGen (Wu
However, as a consensual hypothesis in manifold learning, the pre-trained language model estimates a low-dimensional manifold of language in a high-dimensional embedding space, which means most points in the embedding space are not probabilistically modeled by the language model. We believe that placing too much trust in the distributional modeling ability of language models is not a good choice. Our method attempts to depict the attribute space with discrete sample points of attributed sentences and make these discrete points, along with their coverage areas, compose the support set of our estimated distribution. + +# 7 Conclusion + +In this work, we present a distributional perspective for the multi-aspect controllable text generation with experimental results confirming the superiority of our model. Further observations on the 2D projection of the estimated attribute space show that our hypothesis about the attribute space is more feasible. In the future, we can explore the correlation between different attribute combinations for more fine-grained control and capture the bias in datasets to eliminate or utilize it. + +# Limitations + +Our method has a certain dependence on the data since we need to estimate an attribute space. Therefore, it is difficult for our method to perform well in the setting of few-shot learning. However, this disadvantage is not that severe, because we only need single-aspect data, which is relatively sufficient in style transfer tasks. Another dependence of our method on data is that it is somewhat sensitive to biases in the data. When the semantic divergence of different aspects in training data is too large, our aspect gap loss, which aims to reduce the distance among the distributions of each aspect, will conflict with the sentence reconstruction loss. As a result, it may be hard to obtain a reliable intersection in the attribute space. 
Computational resources also affect our approach, as the aspect gap loss relies on a batch-level estimation for each aspect. A larger batch size therefore yields a more accurate approximation, leaving fewer biases in the attribute space. An alternative strategy for smaller batches is to backpropagate the loss only after accumulating enough distributional samples, which requires more training epochs.

# Ethics Statement

We are fully aware that text generation technology can be used maliciously to generate fake, toxic, or offensive content. However, after training on the Detoxification aspect, controllable text generation technology is a powerful weapon for combating hate speech and eliminating harmful information in pre-trained language models. In addition, our multi-aspect controllable text generation technology can take Detoxification as a default aspect when controlling other aspects. We believe it is meaningful and beneficial to advance research on controllable text generation.

# Acknowledgements

Xiaocheng Feng is the corresponding author of this work. We thank the anonymous reviewers for their insightful comments. This work was supported by the National Key R&D Program of China via grant 2020AAA0106502, the National Natural Science Foundation of China (NSFC) via grant 62276078, and the Major Key Project of PCL, PCL2021A06.

# References

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020a. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901.
Curran Associates, Inc. + +Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020b. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901. + +Fredrik Carlsson, Joey Ohman, Fangyu Liu, Severine Verlinden, Joakim Nivre, and Magnus Sahlgren. 2022. Fine-grained controllable text generation using non-residual prompting. In Proceedings of the 60th Annual Meeting of the Association for Computational + +Linguistics (Volume 1: Long Papers), pages 6837-6857, Dublin, Ireland. Association for Computational Linguistics. +Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In International Conference on Learning Representations. +Yu Duan, Canwen Xu, Jiaxin Pei, Jialong Han, and Chenliang Li. 2020. Pre-train and plug-in: Flexible conditional text generation with variational autoencoders. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 253-262, Online. Association for Computational Linguistics. +Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, pages 94-104, Copenhagen, Denmark. Association for Computational Linguistics. +Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Jiaming Wu, Heng Gong, and Bing Qin. 2022. Improving controllable text generation with position-aware weighted decoding. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3449-3467, Dublin, Ireland. Association for Computational Linguistics. +Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021a. Debertav3: Improving deberta using electra-style pretraining with gradient-disentangled embedding sharing. arXiv preprint arXiv:2111.09543. 
+Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021b. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations. +Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, page 1587-1596. JMLR.org. +Nitish Shirish Keskar, Bryan McCann, Lav Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL - A Conditional Transformer Language Model for Controllable Generation. arXiv preprint arXiv:1909.05858. +Muhammad Khalifa, Hady Elsahar, and Marc Dymetman. 2020. A distributional approach to controlled text generation. In International Conference on Learning Representations. +Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, and Nazneen Fatema Rajani. 2021. GeDi: Generative discriminator guided sequence generation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4929-4952, Punta + +Cana, Dominican Republic. Association for Computational Linguistics. +Sachin Kumar, Eric Malmi, Aliaksei Severyn, and Yulia Tsvetkov. 2021. Controlled text generation as continuous optimization with multiple constraints. Advances in Neural Information Processing Systems, 34. +Sachin Kumar, Biswajit Paria, and Yulia Tsvetkov. 2022. Constrained sampling from language models via Langevin dynamics in embedding spaces. arXiv preprint arXiv:2205.12558. +Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and F Huang. 2006. A tutorial on energy-based learning. +Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119, San Diego, California. 
Association for Computational Linguistics. +Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582-4597, Online. Association for Computational Linguistics. +Zhiyu Lin and Mark Riedl. 2021. Plug-and-blend: A framework for controllable story generation with blended control codes. In Proceedings of the Third Workshop on Narrative Understanding, pages 62-71, Virtual. Association for Computational Linguistics. +Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021a. DExperts: Decoding-time controlled text generation with experts and anti-experts. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6691-6706, Online. Association for Computational Linguistics. +Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021b. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586. +Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, + +Oregon, USA. Association for Computational Linguistics. +Florian Mai, Nikolaos Pappas, Ivan Montero, Noah A. Smith, and James Henderson. 2020. Plug and play autoencoders for conditional text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6076-6092, Online. 
Association for Computational Linguistics. +Fatemehsadat Mireshghallah, Kartik Goyal, and Taylor Berg-Kirkpatrick. 2022. Mix and match: Learning-free controllable text generation using energy language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 401-415, Dublin, Ireland. Association for Computational Linguistics. +Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In Proceedings of the 19th International Conference on World Wide Web, WWW '10, page 751-760, New York, NY, USA. Association for Computing Machinery. +Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. Mauve: Measuring the gap between neural text and human text using divergence frontiers. In Advances in Neural Information Processing Systems, volume 34, pages 4816-4828. Curran Associates, Inc. +Jing Qian, Li Dong, Yelong Shen, Furu Wei, and Weizhu Chen. 2022. Controllable natural language generation with contrastive prefixes. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 2912-2924, Dublin, Ireland. Association for Computational Linguistics. +Lianhui Qin, Sean Welleck, Daniel Khashabi, and Yejin Choi. 2022. Cold decoding: Energy-based constrained text generation with langevin dynamics. arXiv preprint arXiv:2202.11705. +Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. +Chen Henry Wu, Saman Motamed, Shaunak Srivastava, and Fernando De la Torre. 2022. Generative visual prompt: Unifying distributional control of pre-trained generative models. arXiv preprint arXiv:2209.06970. +Kevin Yang and Dan Klein. 2021. FUDGE: Controlled text generation with future discriminators. 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3511-3535, Online. Association for Computational Linguistics.

Kexin Yang, Dayiheng Liu, Wenqiang Lei, Baosong Yang, Mingfeng Xue, Boxing Chen, and Jun Xie. 2022. Tailor: A prompt-based approach to attribute-based controlled text generation. arXiv preprint arXiv:2204.13362.
Dian Yu, Zhou Yu, and Kenji Sagae. 2021. Attribute alignment: Controlling text generation from pretrained language models. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2251-2268, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.
Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593.

# A Distribution of Attributes

# A.1 Independent Projection of Attributes

We project sample points to 2D by Principal Component Analysis (PCA), with each attribute projected independently. We display a scatter plot for each attribute and perform Gaussian kernel density estimation: darker areas denote higher probability, where more representation points of oracle sentences gather, and the region annotated by a red ellipse is the estimated distributional center.

We highlight the distributions of the World, Sports, and Sci/Tech attributes in Figures 5 to 7, which are significantly asymmetric.
In particular, the projected distribution of the World attribute is even non-convex. This observation supports our hypothesis that the practical distributions of attributes are far more complex than symmetric distributions such as the Gaussian.

![](images/d57078c7c0b705a6a5cfc7506d2fe2665b91c38c708bac80dc8dbe664facdb2d.jpg)
Figure 5: Distribution of World attribute from Topic aspect.

![](images/4145445589c14c736edcffcef99fb127122dbc6f5784ef8f8435bfa8b0f62e3a.jpg)
Figure 6: Distribution of Sports attribute from Topic aspect.

![](images/f9f8e2f6e91876f368a29b365f8f8ba1240ad94b0cf6990ce7f61b0f41a1479c.jpg)
Figure 7: Distribution of Sci/Tech attribute from Topic aspect.

In addition, we plot the projected distributions of the other attributes in Figures 8 to 12. Attributes such as Positive and Negative seem roughly symmetric in the 2D projection. However, we cannot guarantee their symmetry in high-dimensional space, because PCA selects the directions along which the variation in the data is maximal; this selection strategy is unrelated to symmetry or asymmetry, so a distribution that looks symmetric in 2D may still be asymmetric in high-dimensional space, with the asymmetric directions discarded during projection. Worse still, the long-tail region of a skewed direction may be too sparse, yielding lower variation than the symmetric directions.

![](images/d8a1019365b16814b73fbf81d425d0c1e7423591a83baeba6d0e3e37c64aaf7e.jpg)
Figure 8: Distribution of Negative attribute from Sentiment aspect.

![](images/e464e4b458a00f45c97bc5f2c05021d20ee97c35c2fa2915609f7b056ea96a7e.jpg)
Figure 9: Distribution of Positive attribute from Sentiment aspect.

![](images/1ebcba303f142c6ba7fdad2198b88c825ca1e732b56b66337567bcb4670fa130.jpg)
Figure 10: Distribution of Business attribute from Topic aspect.
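The per-attribute projection and density estimation described in this appendix can be sketched with plain numpy. This is a minimal illustration with random stand-in embeddings, not the paper's actual encoder outputs; the function names and the fixed bandwidth are assumptions for the sketch.

```python
import numpy as np

def project_2d(embeddings):
    """Project high-dimensional points to 2D via PCA (numpy SVD)."""
    centered = embeddings - embeddings.mean(axis=0)
    # Right singular vectors of the centered data are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T  # shape: (n, 2)

def gaussian_kde(points, grid, bandwidth=0.5):
    """Evaluate a simple Gaussian kernel density estimate at `grid` points."""
    diffs = grid[:, None, :] - points[None, :, :]        # (g, n, 2)
    sq = (diffs ** 2).sum(-1) / (2 * bandwidth ** 2)     # (g, n)
    return np.exp(-sq).sum(axis=1) / (len(points) * 2 * np.pi * bandwidth ** 2)

# Toy demo: 768-d "embeddings" for one attribute, projected independently.
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 768))
pts = project_2d(emb)
density = gaussian_kde(pts, pts)  # density at each sample point
```

In the paper's figures the density surface is what makes the asymmetry visible; the red-ellipse "distributional center" corresponds to the highest-density region of such an estimate.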
![](images/34c9fdb090ad890b0505e60dd8c99190f6033f82a92838fba7f5f9da38506a28.jpg)
Figure 11: Distribution of Toxic attribute from Detoxification aspect.

![](images/1f7ac381c8866454f8cfb6ba8453204e66b47047d2243263c5c2c4501b4c815b.jpg)
Figure 12: Distribution of Non-toxic attribute from Detoxification aspect.

# A.2 Joint Projection of Attributes

We jointly project combined sample points of attributes from three different aspects to 2D by PCA, and display a scatter plot for each combination in Figures 13 to 20. The intersection points computed by the baselines' interpolation strategy and by our intersection-searching algorithm are labeled Baseline and Ours, respectively. From these figures, we observe that NonToxic largely covers the two sentiment attributes, or at least shares large intersection areas with them. By contrast, the intersection areas between the sentiment attributes and the topic attributes, except for Sci/Tech, are narrow and sparse. Compared with the baselines' strategy, the point found by our search algorithm lies closer to the intersection area, especially for the Negative and Business attributes in Figures 13 to 15 and 19.

![](images/7ee149cab6537a0720a2ba2a1288e216229fea973fa981946ed9980a1a5009ff.jpg)
Figure 13: Jointly projected distributions of attributes: Negative, World, and NonToxic from aspects: Sentiment, Topic, and Detoxification, respectively.

![](images/0c230e498247c7e184aec3f72bf71918b6c19b5f07b9d4d07657e44d4a6f985d.jpg)
Figure 14: Jointly projected distributions of attributes: Negative, Sports, and NonToxic from aspects: Sentiment, Topic, and Detoxification, respectively.

![](images/39fe1d4a1551a649f18cef8b73272f71ca2fa19452886b467025945b5ea5dc97.jpg)
Figure 15: Jointly projected distributions of attributes: Negative, Business, and NonToxic from aspects: Sentiment, Topic, and Detoxification, respectively.
![](images/ac3f721f9bd7c03f3713063fcd8ab7ff6640a3078d71367beb8880921e49a2ec.jpg)
Figure 16: Jointly projected distributions of attributes: Negative, Sci/Tech, and NonToxic from aspects: Sentiment, Topic, and Detoxification, respectively.

![](images/da09ef8788e097737ced633c3dcf4118c6246593c76409a21410c06dc978bd5.jpg)
Figure 17: Jointly projected distributions of attributes: Positive, World, and NonToxic from aspects: Sentiment, Topic, and Detoxification, respectively.

![](images/b8e96f641e086db1e92ba19c9a8bd243b2e9b87a9cb132a9bed649da21169761.jpg)
Figure 18: Jointly projected distributions of attributes: Positive, Sports, and NonToxic from aspects: Sentiment, Topic, and Detoxification, respectively.

![](images/716ad3da2b13c041e633b47f2299e923fc0dc89c61f166511c0f1d3f77ffa.jpg)
Figure 19: Jointly projected distributions of attributes: Positive, Business, and NonToxic from aspects: Sentiment, Topic, and Detoxification, respectively.

![](images/da86e6299181ad30a8e223e6ffae2e65d2292a9a40475a2f45d4b1c0eac96ba7.jpg)
Figure 20: Jointly projected distributions of attributes: Positive, Sci/Tech, and NonToxic from aspects: Sentiment, Topic, and Detoxification, respectively.

# A.3 Projection of Positive and Negative

![](images/80d6ffebcdbe84ed1985de47604da594bf89c7c4e253117270fce5845f7f6d4b.jpg)
Figure 21: Jointly projected distributions of Positive and Negative.

Except for some noise in the dataset, the Positive and Negative attributes do not intersect when jointly projected.

# B Hyperparameters and Details

Our methods are implemented with the Hugging Face Transformers package. The encoder is initialized with Bert-base-uncased, and the fixed decoder uses GPT2-medium. Each sentence is tokenized with Bert's WordPiece tokenizer before entering the encoder and with GPT2's Byte-Pair Encoding tokenizer before entering the decoder.
We perform mean pooling on the encoder outputs and convert them to 768-dimensional latent representations, which are points in our attribute space. Afterward, each latent representation is mapped to a prefix of dimension $20 \times 24 \times 2 \times 1024$, where 20 is the prefix sequence length, 24 is the number of hidden layers in GPT2-medium, 2 represents one key and one value, and 1024 is the hidden-state size of GPT2-medium. Note that the prefix length Contrastive Prefix uses is 10 for single-aspect control and $10 \times$ the number of aspects for multi-aspect control (30 for three-aspect control), whereas our prefix length is fixed to 20, independent of the number of aspects.

During training, we use half-precision mode for efficiency on one NVIDIA A100 80GB GPU with a batch size of 128, since a larger batch size gives a more accurate batch-level estimation for the aspect gap loss. In our setting, the random seed is $0$, $w_{1} = 0.5$, $w_{2} = 0.2$, $w_{3} = 0.3$, the variation hyperparameter $\lambda$ is 1e-3, the optimizer is AdamW with a learning rate of 1e-4, and the number of training epochs is 150; we use the checkpoint at step 30000. Training takes about 8 hours, and we run six experiments to search for $\lambda \in \{2\mathrm{e}{-}3, 1\mathrm{e}{-}3, 5\mathrm{e}{-}4, 1\mathrm{e}{-}4, 5\mathrm{e}{-}5, 1\mathrm{e}{-}5\}$, while the other hyperparameters keep their initial settings.
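The latent-to-prefix reshaping above can be sketched in plain numpy. The projection matrix here is a random stand-in for the learned mapping, and the geometry is scaled down so the sketch runs instantly (the paper's actual shape is $20 \times 24 \times 2 \times 1024$ from a 768-d latent); only the shape bookkeeping is the point.

```python
import numpy as np

# Real geometry (paper): latent 768 -> prefix 20 x 24 x 2 x 1024 for GPT2-medium.
# Scaled down here so the demo is cheap; only the shapes matter.
LATENT, PREFIX_LEN, N_LAYERS, KV, HIDDEN = 32, 20, 24, 2, 16

rng = np.random.default_rng(0)
# Random stand-in for the learned latent-to-prefix projection.
W = rng.normal(scale=0.02, size=(LATENT, PREFIX_LEN * N_LAYERS * KV * HIDDEN))

def latent_to_prefix(z):
    """Map attribute-space points to per-layer key/value prefix tensors."""
    flat = z @ W                                   # (batch, prefix_len*layers*2*hidden)
    return flat.reshape(-1, PREFIX_LEN, N_LAYERS, KV, HIDDEN)

prefix = latent_to_prefix(rng.normal(size=(4, LATENT)))
```

Each of the 24 layer slices would then supply one key and one value prefix of length 20 to the frozen decoder's attention, which is why the prefix length is independent of how many aspects are being controlled.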
| Combination | Weight |
| --- | --- |
| Neg. & World & NonTox. | 2 : 7 : 1 |
| Neg. & Sports & NonTox. | 2 : 4 : 1 |
| Neg. & Business & NonTox. | 2 : 8 : 1 |
| Neg. & Sci./Tech. & NonTox. | 3 : 1 : 3 |
| Pos. & World & NonTox. | 2 : 12 : 1 |
| Pos. & Sports & NonTox. | 3 : 5.5 : 1 |
| Pos. & Business & NonTox. | 2 : 9 : 1 |
| Pos. & Sci./Tech. & NonTox. | 3 : 1 : 1 |
Table 5: Specialized Weight for Attribute Balance.

During the inference phase, the maximum number of iterations $T$ is 15, the number of candidates $M$ is 1000, and the number of neighbors $K$ is 200. We use a specialized list of weight parameters for each combination of attributes, shown in Table 5, to balance performance among attributes from different aspects. After the intersection-searching iterations, our strategy is first to select the 10 candidates with the smallest distances to their neighbors as the final candidate set, and then to randomly choose one of these ten as the intersection's representation, for text generation diversity. Our text generation process is the same as prefix tuning, with the sequence length set to 50. Except for model and data loading, the entire evaluation process for each attribute combination, including intersection searching, text generation, and attribute-relevance evaluation, takes about 2 minutes. This allows us to manually tune the attribute weights to balance them, with at most 8 trials per weight.

The 35 prompts used at inference follow the PPLM setting, with 20 from its bag-of-words setting and 15 from its discriminator setting:

- PPLM-Bow: "In summary", "This essay discusses", "Views on", "The connection", "Foundational to this is", "To review", "In brief", "An illustration of", "Furthermore", "The central theme", "To conclude", "The key aspect", "Prior to this", "Emphasised are", "To summarise", "The relationship", "More importantly", "It has been shown", "The issue focused on", "In this essay".
- PPLM-Discrim: "Once upon a time", "The book", "The chicken", "The city", "The country", "The horse", "The lake", "The last time", "The movie", "The painting", "The pizza", "The potato", "The president of the country", "The road", "The year is 1910".
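Under the assumption that the candidates and the oracle sample points are plain point sets, the selection step described above (rank candidates by mean distance to their $K$ nearest neighbors, keep the ten closest, then draw one at random for diversity) can be sketched as follows; the function name and toy dimensions are illustrative, not the paper's implementation.

```python
import numpy as np

def select_intersection(candidates, samples, k=200, top=10, rng=None):
    """Rank candidates by mean distance to their k nearest oracle samples,
    keep the `top` closest, and draw one at random for generation diversity."""
    if rng is None:
        rng = np.random.default_rng(0)
    k = min(k, len(samples))
    # Pairwise Euclidean distances: (n_candidates, n_samples).
    d = np.linalg.norm(candidates[:, None, :] - samples[None, :, :], axis=-1)
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
    finalists = np.argsort(knn_mean)[:top]           # final candidate set
    return candidates[rng.choice(finalists)]          # random pick for diversity

rng = np.random.default_rng(1)
samples = rng.normal(size=(500, 8))   # stand-in for oracle attribute points
cands = rng.normal(size=(100, 8))     # stand-in for search candidates
chosen = select_intersection(cands, samples, k=50, top=10, rng=rng)
```

The random draw among the ten finalists trades a little optimality for output diversity, matching the selection strategy described in the text.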
Detailed settings of the baselines: (I) Weighted Decoding: For PPLM, we only retrain its classifier heads on our datasets while keeping all other original settings. For GeDi, we use its code directly since we follow its setting. (II) Multi-Objective Optimization: MUCOCO provides a solution for custom classification constraints, so we train these classifiers on our datasets. Mix&Match is relatively complex, as it cannot generate long sentences from scratch with the masked language model Bert. Worse still, as a sampling-based method, it is somewhat dependent on initialization. Therefore, we use sentences generated by PPLM as the starting sentences and let Mix&Match polish the text by itself over iterations. (III) Prefix-Tuning: We reproduce Contrastive Prefix and achieve comparable results. For a fair comparison, we unify the pre-trained language model to GPT2-medium (345M parameters), except for Mix&Match, which uses Bert-large (340M parameters).

# C Cases

We illustrate cases of 8 different attribute combinations in Table 6.
| Attributes | Model | Generation Result |
| --- | --- | --- |
| Neg. World NonTox. | PPLM | Furthermore, the government will be allowed to carry out airstrikes from inside Afghanistan even after the end of Nato's withdrawal, according to senior defence officials. The UK is also planning to use air strikes in Afghanistan and Iraq from the skies over its bases. |
| | GeDi | Furthermore, the Iraqi authorities plan to seize petrol from Shia vehicles and trucks as well. Iraqi Foreign Minister Shaida al-Abadi said Iraq was ready for such actions if US President Donald Trump issued a military order ordering it. |
| | MUCOCO | Furthermore, the new law will make it sale for any individual or entity to regulated keep give away Mumbai firearm to anyone a background check, including those who are prohibited from doing so under federal law. |
| | Mix&Match | Furthermore, the uk will be reluctant to carry out airstrikes anywhere in europe until near the end of nato's mandate, according to some nato officials. the uk is likewise reluctant to drop warheads against iran and iraq from the air over british territory anywhere; |
| | Prefix concatenate | Furthermore, the first and the first of his world. The world.S. The U.S. The U. The world's country and a new-year. |
| | Prefix semi | Furthermore, a new survey conducted by a new survey of the Middle Eastern population in the country was revealed to be a very close match for the official record of the National Socialist Party (NTP) in the country. |
| | Ours | Furthermore, the movie's main focus is getting rid of Robert Kennedy. This movie has no plot, no action and no even remotely decent characterizations. It's simply a glorified version of what happened to George Bush in 2004. |
| Neg. Sports NonTox. | PPLM | This essay discusses the role of private security forces in Libya. The military's role in this crisis can be divided into two phases: 1) The first phase involved the transfer and transfer of the control of the situation to a military body. |
| | GeDi | This essay discusses last season who was demoted away from the league and how his decline in playing time impacted the team as a whole. With detailed observations, analysis, stories provided by some of these players including Orlando City fullback Ben Sweat and Toronto. |
| | MUCOCO | This essay discusses how webstore can Consent a more humane society and how Mold willroximately the way webp topics our own Intake and our relationship with them. enoughWhat is a body)? awa A body is the transsexual-porn Franch structureglers glucobos |
| | Mix&Match | This essay from an official, who was investigating two suspected drug dealers,"failed to find any probable cause." he stated that "confusion reigned" as the two men "struggled for some time" while evans "continued throwing punches." |
| | Prefix concatenate | This essay discusses the fact of the original Germanic tradition of a man's attempt to make a name on the English football team and the fact of the English football league. |
| | Prefix semi | This essay discusses the fact that the NHL is not a national sport. It also provides a new perspective to the fact that the NHL is not a major league. |
| | Ours | This essay discusses how the Miami Heat lost to the Atlanta Hawks in a seven-minute overtime last night, and how they should never again be able to make it with their team mates. |
| Neg. Business NonTox. | PPLM | Foundational to this is the need for a national banking system for the purpose of financing the banking system. The Federal Reserve has already taken over this task by creating and controlling the money supply in the form of the Federal Reserve bank, which is now owned and operated by the Federal. |
| | GeDi | Foundational to this is the New York Int'l Fedal and Foreign Market Team. This practice includes facilitating contacts between two levels of financial institutions as necessary for a successful settlement of an equity investment transaction. |
| | MUCOCO | Foundational to this is a Regulatory SPONSORED of community and debtor. not have a shared history of Recession bourgeois The struggles of Buffpeople, SPONSORED the struggles of all of our individuals bunk are interrelated,_ we are all part ze the same struggle_. |
| | Mix&Match | Foundational to this is the Woodward foundation is our belief that low - income housing has a devastating effect on mental health. our research indicates that depressive symptoms are much greater among the economically weaker. we believe that a depressed patient's life expectancy will decline dramatically as a result. |
| | Prefix concatenate | Foundational to this is the company .n\n |
| | Prefix semi | Foundational to this is the company's ability to offer a full range of services to customers. The company has been able to offer a good product and service for years. |
| | Ours | Foundational to this is a $1 million cut from the company's annual budget, which means it could be difficult to cash out of its debt. |
| Neg. Sci/Tech NonTox. | PPLM | The last time I wrote a blog post about the future of Linux and how to prepare for it I wrote: I am very pleased to announce that the Linux kernel has been updated to 1.8.2! This is good news for the Linux. |
| | GeDi | The last time we examined this topic we estimated that 5% of the age-appropriate weight was unnecessary due to red fatherful neglect; only 14% of adopted healthy, fit barrels were considered but maintained. Our data predict that at least 44lb. |
| | MUCOCO | The last time I checked VMware would's not like the world's largest and fastest ship was a single- Anonlled organism that could fly, and the last time I checked, it's not like the world's largest and fastest ship the a single- |
| | Mix&Match | The last time the lake was drained, however, to make way for a golf course at a nearby site, and there were reports on a nearby island that a large-scale groundwater cleaning program was beginning. the lake was drained, however, to make way elsewhere; |
| | Prefix concatenate | The last time you are a fan, you have to be aware of the bad news about the Internet. The problem is that this isn't even remotely acceptable to the author. |
| | Prefix semi | The last time I see this film I have not seen it in my life. I have watched it in a very limited number of days and I have been very disappointed. The acting is terrible and the acting is bad. |
| | Ours | The last time I saw this film was in the theater. It was terribly disappointing. There is no plot or suspense whatsoever, nor any action whatsoever. The only thing that can be attributed to this movie is the lack of a sound system. |
| Pos. World NonTox. | PPLM | The connection between obesity and autism has been identified for the first time using a unique antibody screening test, according to researchers with the University of Texas MD Anderson Cancer Center. They found that a protein called CD34 has a powerful impact on autism. |
| | GeDi | The connection between Greece and Russia reached new heights through cooperation on a number of initiatives States Parties undertook joint action to crack down on corruption abroad. For instance, the Russian Federation launched an all-cash inquiry aimed at identifying persons. |
| | MUCOCO | The connection between staking two is not a loneliness of mere coincidence. The connection snowball dividing matter of history, and history BW a history arresting its own, of which hero are all the victims analyse nogly I don't believe in coincidence Alger said the |
| | Mix&Match | The connection is an illustration of the moon, from the book 'the lord of light and darkness', by william shakespeare ( photo courtesy of william shakespeare ). an illustration of the sun, from the book 'the lord of light and darkness', by william shakespeare ( photo courtesy) |
| | Prefix concatenate | The connection of the United States's the world of the world's first-year of a new-run of the world in the world.S. |
| | Prefix semi | The connection between the world of the American National Rifle Association and the United States is a fascinating, fascinating, and hilarious tale. It has been an honor to see the film on the National Library shelves, and I am proud to see the film. |
| | Ours | The connection between John Lennon and the United States is as strong as ever. The Los Angeles Times reports that Lee Sternberg's performance of his song "Lenny Luerer" won a round of applause in the U.S. Senate. |
| Pos. Sports NonTox. | PPLM | More importantly, the first day of the 2017 NFL Draft is always exciting to watch with fans, because the league is going to get a lot of great talent on defense in the coming weeks. The biggest draft prospect to come out this year, Alabama DT Vic Beasley |
| | GeDi | More importantly, I appreciated his honesty along the way. Orlando Pace is usually a shadow of his former self, but he understood the importance of all that went into this win and smiled again. |
| | MUCOCO | More importantly, he was able to defenders it work. it two men Fans in likeness fans on a Rugby coach. He had justovich from medical trip that Europe and was looking partners a place to eat. loved had never been in a bar |
| | Mix&Match | More importantly, the sixth game of the 2018 stanley cup finals presents a new challenge and an exciting new opportunity. the philadelphia flyers and pittsburgh penguins are joining forces for a six-game road trip that begins in the nation's capital each weekend. |
| | Prefix concatenate | More importantly, I have to remind everyone that this is a real story, so the fact that the two men were not a couple of people who have to be treated as one of those who would be involved with the team. |
| | Prefix semi | More importantly, the Boston Red Sox have lost the league title, and the players themselves are not yet qualified to be the best player in the league. The fact that they are not even qualified to play a match of the best. |
| | Ours | More importantly, the Houston Astros won a great opportunity to make a comeback with a victory over the Detroit Tigers in the National League West. The team has an outstanding offensive line and is tied for fifth in scoring among the nation. |
| Pos. Business NonTox. | PPLM | In brief, the federal tax law allows employers to deduct up to 20% of compensation expenses from workers' paychecks. This deduction is a big deal because many employees have to pay high deductibles for medical care. |
| | GeDi | In brief, Heiltsuk said that she holds central, shared concerns regarding how First Nations youth can navigate financial injustices faced by society and why net aboriginal debt was surpassed in 2015. All eight First Nations elected delegates at Monday's meeting |
| | MUCOCO | In brief, Bach "sus anthologies pione excel the outstanding Russian Returns in the hacking of capitalists Economics Committee letters were not whirlwind. But that's not what the White Airways said in statement Alibaba late Tuesday afternoon Special Orderable |
| | Mix&Match | In brief, the u. s. department of agriculture ( usda ) produced a comprehensive list of how many jobs were created in 2016, it identified 3. 1 million jobs in the agriculture sector, a dramatic uplift from 2015's 2 million. |
| | Prefix concatenate | In brief, the new of its company .n\n |
| | Prefix semi | In brief, it is the best movie I have ever seen, and I love it. The movie is a perfect blend of comedy and comedy. It is not a classic movie, but it is not a great movie. |
| | Ours | In brief, the economy is surged in July, boosted by strong sales of oil and other products, as well as strong growth in U.S. manufacturing. |
| Pos. Sci/Tech NonTox. | PPLM | The country's first solar power system, built by a group of students at Harvard University, is now operating. The project is aimed at encouraging solar energy development by encouraging collaboration among universities, community groups and individuals. |
| | GeDi | The country illustrated beautifully reflects the complexity of lives and customs. |
| | MUCOCO | The country's top diplomat, Blockchain Lavrov IBM said the UydiaS. was "very much looking into" the matter. pleasantly engineers Rapp a Bridges supplier of hacker vegan Iran, has been trying to improve ties with blockchain, a close ally and |
| | Mix&Match | The country focuses on the role the united states has played in discovering new technologies for the advancement of science, according to two u. s. officials briefed on - site. both officials, newly appointed to handle national security matters welcomed the sensitive nature of the investigation. |
| | Prefix concatenate | The country's top TV channel is now a very popular TV show. The only thing is the name. I'm sure there are many people who would be willing to take it seriously, but I'll be damned to find out if they have a lot |
| | Prefix semi | The country's most famous TV series is the best and most powerful show ever made. The story is great, the action is good, the plot is great, and the story is very good. The cast is great. |
| | Ours | The country's biggest television network has announced that it will offer a new version of the movie which is based upon the popular "Star Trek" series. It's truly amazing to see how many people are involved in making this movie so far. |
Table 6: Generated Cases. Red highlights the sentiment-related content. Blue highlights the topic-related content. Underlined are the input prompts. Strikethrough indicates toxic content.

# D Detailed Results
| Methods | Sentiment Neg. (%) | Sentiment Pos. (%) | Topic World (%) | Topic Sports (%) | Topic Business (%) | Topic Sci./Tech. (%) | Detox. (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **Weighted Decoding Based Methods** | | | | | | | |
| PPLM single-aspect | 97.2 | 62.7 | 74.9 | 46.5 | 62.4 | 98.6 | 93.2 |
| PPLM | 92.2 | - | 75.4 | - | - | - | 82.0 |
| | 84.4 | - | - | 41.8 | - | - | 76.0 |
| | 87.5 | - | - | - | 61.5 | - | 82.9 |
| | 85.3 | - | - | - | - | 95.0 | 76.2 |
| | - | 35.4 | 59.1 | - | - | - | 90.4 |
| | - | 39.5 | - | 34.1 | - | - | 89.5 |
| | - | 40.9 | - | - | 48.3 | - | 91.2 |
| | - | 52.7 | - | - | - | 93.1 | 91.3 |
| average | 87.4 | 42.1 | 67.3 | 38.0 | 54.9 | 94.1 | 84.9 |
| GeDi single-aspect | 93.9 | 70.7 | 73.4 | 85.7 | 75.7 | 98.0 | 94.9 |
| GeDi | 94.7 | - | 80.0 | - | - | - | 90.6 |
| | 84.2 | - | - | 74.8 | - | - | 93.9 |
| | 94.9 | - | - | - | 75.7 | - | 96.6 |
| | 90.6 | - | - | - | - | 80.1 | 92.8 |
| | - | 53.7 | 61.4 | - | - | - | 94.4 |
| | - | 60.5 | - | 74.3 | - | - | 95.2 |
| | - | 57.6 | - | - | 54.3 | - | 95.7 |
| | - | 72.3 | - | - | - | 90.2 | 94.2 |
| average | 91.1 | 61.0 | 70.7 | 74.6 | 65.0 | 85.2 | 94.2 |
| **Multi-Objective Optimization Based Methods** | | | | | | | |
| MUCOCO | 97.9 | - | 54.5 | - | - | - | 85.7 |
| | 94.6 | - | - | 55.8 | - | - | 85.7 |
| | 96.8 | - | - | - | 65.6 | - | 87.3 |
| | 95.5 | - | - | - | - | 96.1 | 86.9 |
| | - | 30.4 | 48.0 | - | - | - | 91.0 |
| | - | 26.3 | - | 59.8 | - | - | 92.6 |
| | - | 34.6 | - | - | 62.1 | - | 93.8 |
| | - | 43.9 | - | - | - | 95.1 | 93.1 |
| average | 96.2 | 33.8 | 51.3 | 57.8 | 63.9 | 95.6 | 89.5 |
| Mix&Match single-aspect | 99.2 | 63.3 | 79.5 | 57.4 | 69.6 | 99.3 | 96.9 |
| Mix&Match | 96.1 | - | 80.6 | - | - | - | 93.1 |
| | 97.7 | - | - | 48.2 | - | - | 93.0 |
| | 98.2 | - | - | - | 66.6 | - | 97.0 |
| | 96.8 | - | - | - | - | 99.6 | 96.1 |
| | - | 53.0 | 67.3 | - | - | - | 95.5 |
| | - | 45.0 | - | 44.0 | - | - | 96.7 |
| | - | 41.5 | - | - | 55.8 | - | 97.7 |
| | - | 59.7 | - | - | - | 97.3 | 97.5 |
| average | 97.2 | 49.8 | 74.0 | 46.1 | 61.2 | 98.5 | 95.8 |
| **Prefix-Tuning Based Methods** | | | | | | | |
| Prefix single-aspect | 88.4 | 90.6 | 74.5 | 85.3 | 93.5 | 93.6 | 93.8 |
| Contrastive Prefix concatenation | 32.4 | - | 50.3 | - | - | - | 90.9 |
| | 88.1 | - | - | 73.8 | - | - | 89.1 |
| | 51.6 | - | - | - | 70.0 | - | 94.1 |
| | 94.3 | - | - | - | - | 94.1 | 88.3 |
| | - | 77.6 | 46.8 | - | - | - | 92.2 |
| | - | 70.2 | - | 78.5 | - | - | 95.9 |
| | - | 51.9 | - | - | 73.1 | - | 94.7 |
| | - | 72.0 | - | - | - | 88.1 | 95.6 |
| average | 66.6 | 67.9 | 48.5 | 76.2 | 71.6 | 91.1 | 92.6 |
| Contrastive Prefix semi-supervised | 65.5 | - | 80.6 | - | - | - | 91.8 |
| | 67.2 | - | - | 90.3 | - | - | 92.5 |
| | 56.0 | - | - | - | 79.2 | - | 92.2 |
| | 90.0 | - | - | - | - | 93.3 | 84.8 |
| | - | 93.5 | 64.8 | - | - | - | 95.1 |
| | - | 41.8 | - | 78.5 | - | - | 94.8 |
| | - | 87.4 | - | - | 41.7 | - | 95.2 |
| | - | 93.6 | - | - | - | 86.7 | 95.3 |
| average | 69.7 | 79.1 | 72.7 | 84.4 | 60.5 | 90.0 | 92.7 |
| | 69.7 | - | 71.7 | - | - | - | 84.1 |
| | 78.6 | - | - | 80.0 | - | - | 80.2 |
| | 99.9 | - | - | - | 96.7 | - | 96.8 |
| | 92.8 | - | - | - | - | 98.0 | 81.7 |
| | - | 80.5 | 58.0 | - | - | - | 95.1 |
| | - | 84.7 | - | 86.6 | - | - | 94.5 |
| | - | 87.6 | - | - | 91.7 | - | 98.1 |
| | - | 99.7 | - | - | - | 96.1 | 95.4 |
| average | 85.3 | 88.1 | 64.9 | 83.3 | 94.2 | 96.8 | 90.7 |
| Ours | 64.3 | - | 51.8 | - | - | - | 90.1 |
| | 71.5 | - | - | 71.0 | - | - | 93.4 |
| | 68.2 | - | - | - | 59.9 | - | 95.7 |
| | 62.4 | - | - | - | - | 99.8 | 96.0 |
| | - | 92.0 | 60.6 | - | - | - | 97.6 |
| | - | 59.4 | - | 93.8 | - | - | 94.3 |
| | - | 86.8 | - | - | 72.1 | - | 97.9 |
| | - | 68.3 | - | - | - | 98.4 | 97.2 |
| average | 66.6 | 76.6 | 56.2 | 82.4 | 66.0 | 99.1 | 95.3 |
| Ours | 99.2 | - | 15.2 | - | - | - | 96.5 |
| | 99.8 | - | - | 36.5 | - | - | 96.3 |
| | 97.8 | - | - | - | 17.9 | - | 95.4 |
| | 84.9 | - | - | - | - | 97.7 | 95.6 |
| | - | 3.2 | 14.4 | - | - | - | 96.3 |
| | - | 0.1 | - | 40.4 | - | - | 96.0 |
| | - | 1.3 | - | - | 13.9 | - | 95.7 |
| | - | 6.5 | - | - | - | 97.7 | 95.8 |
| average | 95.4 | 5.6 | 14.8 | 38.5 | 15.9 | 97.7 | 96.0 |
Table 7: Detailed Combination Results on Multi-Aspect Control.
# Kernelized Concept Erasure

Shauli Ravfogel$^{1,2}$ Francisco Vargas$^{3}$ Yoav Goldberg$^{1,2}$ Ryan Cotterell$^{4}$

$^{1}$Bar-Ilan University $^{2}$Allen Institute for Artificial Intelligence

$^{3}$University of Cambridge $^{4}$ETH Zürich

{shauli.ravfogel, yoav.goldberg}@gmail.com

fav25@cam.ac.uk ryan.cotterell@inf.ethz.ch

# Abstract

The representation space of neural models for textual data emerges in an unsupervised manner during training. Understanding how those representations encode human-interpretable concepts is a fundamental problem. One prominent approach for the identification of concepts in neural representations is searching for a linear subspace whose erasure prevents the prediction of the concept from the representations.
However, while many linear erasure algorithms are tractable and interpretable, neural networks do not necessarily represent concepts in a linear manner. To identify non-linearly encoded concepts, we propose a kernelization of a linear minimax game for concept erasure. We demonstrate that it is possible to prevent specific nonlinear adversaries from predicting the concept. However, the protection does not transfer to different nonlinear adversaries. Therefore, exhaustively erasing a non-linearly encoded concept remains an open problem.

![](images/7179d4f69aaa79d170ce1162adc15063cd067c5b635d0cf5e0b04a3cf8ab6ed4.jpg)

https://github.com/shauli-ravfogel/adv-kernel-removal

# 1 Introduction

Large neural networks in NLP produce real-valued representations that encode the bit of human language that they were trained on, e.g., words, sentences, or text grounded in images. For instance, GloVe (Pennington et al., 2014) produces real-valued representations of isolated words, BERT (Devlin et al., 2019) produces real-valued representations of sentences, and VilBERT produces real-valued representations of visually grounded language (Lu et al., 2019; Bugliarello et al., 2021). These real-valued representations naturally encode various properties of the objects they represent. For instance, a good representation of the first author's laptop computer ought to encode its manufacturer, size, and color somewhere among its real values.

We now describe the premise of our paper in more detail. We adopt the notion of a concept due to Gärdenfors (2000). For Gärdenfors, objects can be thought of as having a geometric representation.
+ +We now motivate the task more formally by extending the example. Imagine we have $N$ different laptops, whose real-valued representations are denoted as $\mathbf{x}_1,\ldots ,\mathbf{x}_N$ . Now, consider concept labels $y_{1},\ldots ,y_{N}$ that encode each laptop's color and are taken from the set $\{\mathrm{GREY,SILVER,BLACK,WHITE}\}$ . In the concept erasure paradigm, we seek an erasure function $r(\cdot)$ such that $r(\mathbf{x}_1),\dots,r(\mathbf{x}_N)$ are no longer predictive of the colors of the laptops, $y_{1},\ldots ,y_{N}$ but retain all the other information encoded in the original representations, i.e., they remain predictive with respect to other laptop-related concepts. Our hope is that the geometry of the erasure function $r(\cdot)$ then tells us the structure of the laptop color concept. + +Concept erasure is tightly related to concept identification. Once we have successfully removed a given concept, e.g., color, from a representation $\mathbf{x}_n$ , it is reasonable to argue that the erasure function $r$ has meaningfully identified the concept within the representation space. In the rest of the paper, we will say that $r(\cdot)$ neutralizes the concept in the representation space. For instance, we say that the $r$ in our example neutralizes the concept of a laptop's color. It follows that concept identification is related to bias mitigation (Bolukbasi et al., 2016; Gonen and Goldberg, 2019; Maudslay et al., 2019), e.g., one may want to identify and remove the gender encoded in learned word representations produced by an embedding method such as word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014). Indeed, the empirical portion of this paper will focus on removing gender + +from word representations in order to mitigate bias. + +Previous work on concept erasure (Ravfogel et al., 2021) focuses on the linear case, i.e., where $r$ is a linear function. 
While linear concept erasure methods have certainly found success (Bolukbasi et al., 2016), there is no a priori reason to suspect that neural networks encode concepts in a linear manner. In this work, we take the first step toward the goal of identifying a non-linear function $r(\cdot)$ and a corresponding non-linearly encoded concept subspace. We directly build on Ravfogel et al. (2022), who cast linear concept erasure as a minimax game. Under their formulation, the function $r(\cdot)$ learns to remove the concept, while an adversary tries to predict the concept. We extend their work by deriving a class of general minimax games based on kernelization that largely maintains the tractability of the linear approach. Our kernelized method performs concept erasure in a reproducing kernel Hilbert space, which may have a much higher dimensionality (Scholkopf and Smola, 2002) and correspond to a non-linear subspace of the original representation space.

Empirically, we experiment with gender erasure from GloVe and BERT representations. We show that a kernelized adversary can classify the gender of the representations with over $99\%$ accuracy if $r(\cdot)$ is taken to be a linear function. This gives us concrete evidence that gender is indeed encoded non-linearly in the representations. We further find that solving our kernelized minimax game yields an erasure function $r(\cdot)$ that protects against an adversary that shares the same kernel. However, we also find that it is difficult to protect against all kernelized adversaries at once: information removed by one kernel type can be recovered by adversaries using other kernel types. That is, the gender concept is not exclusively encoded in a space that corresponds to any one kernel. This suggests that non-linear concept erasure is very much an open problem.

# 2 Linear Concept Erasure

We provide an overview of the linear minimax formulation before we introduce its kernelization.
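Before the formal setup, the core idea of linear erasure can be illustrated with a minimal sketch (all sizes, the seed, the axis-aligned concept direction, and the least-squares probe below are illustrative assumptions, not the paper's experimental setup): projecting out a single direction removes a linearly encoded binary concept, driving a linear probe to chance accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 2000, 8
X = rng.normal(size=(N, D))
v = np.zeros(D)
v[0] = 1.0                               # assumed direction encoding the concept
y = (X @ v > 0).astype(float)            # binary concept label, linearly encoded along v

def fit_linear_probe(X, y):
    # simple least-squares probe: regress centered labels on the representations
    w, *_ = np.linalg.lstsq(X, y - 0.5, rcond=None)
    return w

def probe_accuracy(X, y, w):
    return float(((X @ w > 0) == (y > 0.5)).mean())

Xtr, ytr, Xte, yte = X[:1000], y[:1000], X[1000:], y[1000:]
P = np.eye(D) - np.outer(v, v)           # orthogonal projection that removes the concept direction

acc_before = probe_accuracy(Xte, yte, fit_linear_probe(Xtr, ytr))
acc_after = probe_accuracy(Xte @ P, yte, fit_linear_probe(Xtr @ P, ytr))
print(acc_before, acc_after)             # high accuracy before erasure, near chance after
```

The next subsection formalizes exactly this kind of projection and the adversarial game played over it.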
# 2.1 Notation

Let $\mathcal{D} = \{(y_n,\mathbf{x}_n)\}_{n = 1}^N$ be a dataset of $N$ concept-representation pairs, where the labels $y_{n}$ represent the concept to be neutralized. The goal of linear concept erasure is to learn a linear erasure function $r(\cdot)$ from $\mathcal{D}$ such that it is impossible to predict $y_{n}$ from the modified representations $r(\mathbf{x}_n)$. We focus on classification, where the concept labels $y_{n}$ are derived from a finite set $\{1,\dots,V\}$ of $V$ discrete values, and the representations $\mathbf{x}_n \in \mathbb{R}^D$ are $D$-dimensional real column vectors. To predict the concept labels $y_{n}$ from the representations $\mathbf{x}_n$, we make use of (and later kernelize) classifiers that are linear models, i.e., classifiers of the form $\widehat{y}_n = \pmb{\theta}^\top \mathbf{x}_n$, where $\pmb{\theta} \in \Theta \subseteq \mathbb{R}^{D}$ is a column vector of parameters that lives in a space $\Theta$. We also consider an arbitrary loss function $\ell(\cdot,\cdot) \geq 0$, where $\ell(y_n,\widehat{y}_n)$ tells us how close the prediction $\widehat{y}_n$ is to $y_{n}$.

Using this notation, linear concept erasure is realized by identifying a linear concept subspace whose neutralization, achieved through an orthogonal projection matrix, prevents the classifier from predicting the concept. We define $\mathcal{P}_k$ as the set of all $D\times D$ orthogonal projection matrices that neutralize a rank-$k$ subspace. More formally, we have that $P \in \mathcal{P}_k \leftrightarrow P = I_D - W^\top W$, $W \in \mathbb{R}^{k\times D}$, $WW^\top = I_k$, where $I_{k}$ denotes the $k\times k$ identity matrix and $I_{D}$ denotes the $D\times D$ identity matrix.
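This definition can be instantiated numerically; a minimal sketch, assuming NumPy (the dimensions are arbitrary), builds a member of $\mathcal{P}_k$ from a random $W$ with orthonormal rows and checks the projection properties:

```python
import numpy as np

rng = np.random.default_rng(1)
D, k = 10, 3
# Random W with orthonormal rows: take the Q factor of a D×k Gaussian matrix, transposed
Q, _ = np.linalg.qr(rng.normal(size=(D, k)))
W = Q.T                                  # shape (k, D), satisfies W @ W.T == I_k
P = np.eye(D) - W.T @ W                  # orthogonal projection neutralizing rowspace(W)

assert np.allclose(W @ W.T, np.eye(k))   # orthonormal rows
assert np.allclose(P @ P, P)             # idempotent
assert np.allclose(P, P.T)               # symmetric
assert np.allclose(P @ W.T, 0)           # annihilates the k-dimensional subspace
assert np.isclose(np.trace(P), D - k)    # rank D − k, so trace D − k
```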
We say that the matrix $P$ neutralizes the $k$-dimensional rowspace of $W$.

# 2.2 A Linear Minimax Game

Following this formalization, it is natural to define a minimax game (Neumann and Morgenstern, 1944) between a projection matrix $P \in \mathcal{P}_k$ that aims to remove the concept subspace, and a linear model parameterized by $\pmb{\theta}$ that aims to recover it:

$$
\min_{\boldsymbol{\theta} \in \Theta} \max_{P \in \mathcal{P}_k} \sum_{n=1}^{N} \ell\left(y_n, \boldsymbol{\theta}^\top P \mathbf{x}_n\right) \tag{1}
$$

This is a special case of the general adversarial framework (Goodfellow et al., 2014). However, in our case, the predictor $\pmb{\theta}$ does not interact with the original input $\mathbf{x}_n$. Instead, the classifier attempts to predict the concept label $y_{n}$ from $P\mathbf{x}_n$. We now give a concrete instance of the linear game: when we have a binary logistic loss $(V = 2)$ and a $k$-dimensional neutralized subspace, the game takes the following form:

$$
\min_{\boldsymbol{\theta} \in \Theta} \max_{P \in \mathcal{P}_k} \sum_{n=1}^{N} y_n \log \frac{\exp \boldsymbol{\theta}^\top P \mathbf{x}_n}{1 + \exp \boldsymbol{\theta}^\top P \mathbf{x}_n} \tag{2}
$$

In this paper, we focus on $k = 1$, i.e., we aim
In words, instead of optimizing over rank- $k$ projection matrices, a non-convex set, we optimize over its convex hull, the Fantope (Boyd and Vandenberghe, 2014): + +$$ +\mathcal {F} _ {k} = \left\{A \in \mathcal {S} ^ {D} \mid I _ {D} \succcurlyeq A \succcurlyeq 0, \operatorname {t r} (A) = k \right\} \tag {4} +$$ + +# 3 Non-linear Concept Erasure + +It has often been shown that human-interpretable concepts, and in particular gender, are encoded nonlinearly in the representation space (Gonen and Goldberg, 2019; Ravfogel et al., 2020). However, prior work on concept erasure, e.g., the method discussed in §2, assumes that concepts are encoded linearly. Our goal is to extend the game defined in Eq. (3) to be able to neutralize non-linearly encoded concepts while preserving the relative tractability and interpretability of the linear methods. A natural manner through which we can achieve these goals is by kernelization (Shawe-Taylor and Cristianini, 2004; Hofmann et al., 2008). + +The underlying assumption motivating kernel methods is that the features needed for the task live in a reproducing kernel Hilbert space (RKHS; Aronszajn, 1950). At an intuitive level, an RKHS allows us to extend some of results from linear algebra to potentially infinite-dimensional vector spaces (Canu and Smola, 2006). The main technical contribution of this paper is the derivation of a kernelized version of the linear adversarial game presented in Eq. (3). We perform this derivation in this section after providing some background on kernel methods. We show that the resulting kernelized game is both non-convex and too computationally heavy to solve directly. Thus, we introduce a Nyström approximation that results in an efficient and light-weight formulation, given in Eq. (10), that is still able to isolate concepts encoded non-linearly. + +# 3.1 Background: Kernel Methods + +Kernel methods are based on reproducing kernel Hilbert spaces (RKHS). 
Without going into the technical details, an RKHS is a space of "nice" functions equipped with a kernel (Yosida, 2012). A kernel $\kappa(\cdot, \cdot) \geq 0$ is a similarity measure that generalizes a positive definite matrix. When we have a kernel over an RKHS, the kernel corresponds to the dot product of a feature map, i.e., $\kappa(\mathbf{x}, \mathbf{y}) = \Phi(\mathbf{x})^\top \Phi(\mathbf{y})$. This insight gives us a natural manner to construct kernels. For instance, the linear kernel $\kappa(\mathbf{x}, \mathbf{y}) = \mathbf{x}^\top \mathbf{y}$ corresponds to the standard dot product in Euclidean space. The degree-2 polynomial kernel $\kappa(\mathbf{x}, \mathbf{y}) = (\gamma \mathbf{x}^\top \mathbf{y} + \alpha)^2$ corresponds to a dot product in a six-dimensional feature space if $\mathbf{x}, \mathbf{y} \in \mathbb{R}^2$.

Kernels are more general than positive definite matrices in that they can exist in infinite-dimensional spaces. For example, the Gaussian kernel $\exp\left(-\gamma ||\mathbf{x} - \mathbf{y}||_2^2\right)$ is infinite-dimensional. However, for any finite set of points $\{\mathbf{x}_1,\dots,\mathbf{x}_N\}$, we can construct a Gram matrix $K \in \mathbb{R}^{N\times N}$ where $K_{nm} = \kappa(\mathbf{x}_n,\mathbf{x}_m)$ encodes the similarity between $\mathbf{x}_n$ and $\mathbf{x}_m$. The matrix $K$ is guaranteed to be positive definite. Kernels are useful because they allow us to implicitly learn functions in a potentially infinite-dimensional RKHS without materializing that space.

# 3.2 A Kernelized Minimax Game

Inspection of the linear adversarial game in Eq. (1) reveals that both the adversary and the predictor interact with the input only via an inner product. Thus, the game can be kernelized by replacing the inner product with a kernel operation.
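Before the derivation, the kernel-as-feature-map identity from §3.1 can be verified concretely. A minimal sketch, assuming NumPy (the hyperparameter values are illustrative), materializes the six-dimensional feature map of the degree-2 polynomial kernel on $\mathbb{R}^2$ and checks that its dot product reproduces the kernel:

```python
import numpy as np

gamma, alpha = 0.5, 1.0

def poly2_kernel(x, y):
    return (gamma * x @ y + alpha) ** 2

def phi(x):
    # explicit 6-dimensional feature map: expanding (γ x·y + α)² term by term
    x1, x2 = x
    return np.array([gamma * x1 ** 2,
                     gamma * x2 ** 2,
                     np.sqrt(2) * gamma * x1 * x2,
                     np.sqrt(2 * gamma * alpha) * x1,
                     np.sqrt(2 * gamma * alpha) * x2,
                     alpha])

rng = np.random.default_rng(2)
x, y = rng.normal(size=2), rng.normal(size=2)
assert np.isclose(poly2_kernel(x, y), phi(x) @ phi(y))   # κ(x, y) = Φ(x)ᵀΦ(y)
```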
We establish this kernelization by first proving the following representer-theorem-like lemma, which shows that $\boldsymbol{w}$ and $\boldsymbol{\theta}$ can be written in terms of spans of the projected training set.

Lemma 1. (Minimax Game Representer Theorem) Let $\mathcal{H}$ be a reproducing kernel Hilbert space with canonical feature map $\Phi: \mathbb{R}^D \to \mathcal{H}$, i.e., $\Phi(\mathbf{x}) = \kappa(\mathbf{x}, \cdot)$. Consider the game:

$$
\max_{\boldsymbol{w} \in \mathcal{H}} \min_{\boldsymbol{\theta} \in \mathcal{H}} \sum_{n=1}^{N} \ell\left(y_n, \langle \boldsymbol{\theta}, \mathrm{P}_{\boldsymbol{w}}^{\perp} \Phi(\mathbf{x}_n) \rangle\right) \tag{5}
$$

where $\mathrm{P}_{\boldsymbol{w}}^{\perp}$ is the operator that projects onto the orthogonal complement of $\boldsymbol{w}$. For every attained
Then, we have: + +$$ +\begin{array}{l} \langle \boldsymbol {\theta}, \Phi_ {p r o j} (\mathbf {z}) \rangle = \sum_ {m = 1} ^ {N} \beta_ {m} \left(\kappa \left(\mathbf {x} _ {m}, \mathbf {z}\right) \right. \\ \left. - \frac {\boldsymbol {\alpha} ^ {\top} K ^ {(m)} (\mathbf {z}) \boldsymbol {\alpha}}{\boldsymbol {\alpha} ^ {\top} K \boldsymbol {\alpha}}\right) \tag {6} \\ \end{array} +$$ + +where $K_{ij}^{(m)}(\mathbf{z})\stackrel {\mathrm{def}}{=}\kappa (\mathbf{x}_i,\mathbf{z})\kappa (\mathbf{x}_m,\mathbf{x}_j)$ + +Proof. See App. A.2 for the proof. + +Lemma 2 suggests the following form of the kernelized game given in Eq. (5): + +$$ +\begin{array}{l} \min _ {\boldsymbol {\beta} \in \mathbb {R} ^ {N}} \max _ {\boldsymbol {\alpha} \in \mathbb {R} ^ {N}} \sum_ {n = 1} ^ {N} \ell \left(y _ {n}, \sum_ {m = 1} ^ {N} \beta_ {m} \left(\kappa \left(\mathbf {x} _ {m}, \mathbf {z} _ {n}\right) \right. \right. \\ \left. - \frac {\boldsymbol {\alpha} ^ {\top} K ^ {(m , n)} \boldsymbol {\alpha}}{\boldsymbol {\alpha} ^ {\top} K \boldsymbol {\alpha}}\right) \Bigg) (7) \\ \end{array} +$$ + +where we define $K_{ij}^{(m,n)}(\mathbf{z})\stackrel {\mathrm{def}}{=}\kappa (\mathbf{x}_i,\mathbf{z}_n)\kappa (\mathbf{x}_m,\mathbf{x}_j)$ In contrast to Eq. (5), all computations in Eq. (7) are in Euclidean space. + +Theorem 1. The reproducing kernel Hilbert space game Eq. (5) attains the same local optima as Eq. (7). + +Proof. First, plug the result of Lemma 2 into Eq. (5). Optimality follows by Lemma 1, the representative lemma. + +Now, we turn to the runtime of the game. + +Proposition 1. The objective in Eq. (7) can be computed in $\mathcal{O}(N^4)$ time. + +Proof. Assuming that $\kappa(\cdot, \cdot)$ may be computed in $\mathcal{O}(1)$ , computing $K^{(m,n)}$ takes $\mathcal{O}(N^2)$ time. We have to do $\mathcal{O}(N^2)$ such computations, which results in $\mathcal{O}(N^4)$ time. Note that $K$ may be pre-computed once in $\mathcal{O}(N^2)$ time. 
Thus, $K^{(m,n)}$ is the bottleneck, so the whole algorithm takes $\mathcal{O}(N^4)$ time.

There are two problems with naively using the formulation in Eq. (7). First, and similarly to Eq. (1), the problem is not convex-concave due to the optimization over $\boldsymbol{\alpha}$, which implicitly defines an orthogonal projection matrix of rank 1. Second, the evaluation time of Eq. (7) is $\mathcal{O}\left(N^4\right)$, as argued in Proposition 1, which makes using a training set larger than a few hundred examples infeasible. We solve both of these computational issues with the Nyström approximation.

# 3.3 The Nyström Approximation

The general idea behind the Nyström method (Nyström, 1930) is to calculate a low-rank approximation of the kernel matrix. It is a commonly used technique for improving the runtime of kernel methods (Williams and Seeger, 2000).

# 3.3.1 Convexifying the Objective

Consider the Gram matrix $K \in \mathbb{R}^{N \times N}$ where $K_{nm} = \kappa(\mathbf{x}_n, \mathbf{x}_m)$, $\mathbf{x}_n$ is the $n^{\text{th}}$ training representation, and $\mathbf{x}_m$ is the $m^{\text{th}}$ training representation. We start with the eigendecomposition $K = U\Sigma U^\top = U\sqrt{\Sigma}\sqrt{\Sigma}U^\top$, which can be computed in $\mathcal{O}(N^3)$ time (Golub and Van Loan, 2013). We are justified in taking the square root of $\Sigma$ because $K$ is necessarily positive definite.
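This factorization is easy to check numerically. A minimal sketch, assuming NumPy (the RBF kernel, sizes, and $\gamma$ are illustrative): the factor $U\sqrt{\Sigma}$ obtained from the eigendecomposition reproduces the Gram matrix, which is what licenses using its rows as feature vectors below.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 4))
K = np.array([[rbf(a, b) for b in X] for a in X])   # Gram matrix, K_nm = κ(x_n, x_m)

sig, U = np.linalg.eigh(K)        # K = U Σ Uᵀ with Σ = diag(sig); eigh since K is symmetric
sig = np.clip(sig, 0.0, None)     # guard against tiny negative eigenvalues from round-off
F = U * np.sqrt(sig)              # F = U √Σ; row n is a feature vector for x_n

assert np.allclose(F @ F.T, K, atol=1e-8)           # U √Σ √Σ Uᵀ recovers K
```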
Now, we define an approximate feature map for observations $\mathbf{x}_n$ in the training data using the eigenvalues and eigenvectors:

$$
\widetilde{\Phi}(\mathbf{x}_n) \stackrel{\text{def}}{=} (U\sqrt{\Sigma})_n \tag{8}
$$

To compute the features for representations $\mathbf{x}$ not in the training data, we use the following:

$$
\widetilde{\Phi}(\mathbf{x}) \stackrel{\text{def}}{=} \sum_{n=1}^{N} \kappa(\mathbf{x}, \mathbf{x}_n)\,\widetilde{\Phi}(\mathbf{x}_n) \tag{9}
$$

which is an average, weighted by the kernel $\kappa(\cdot, \cdot)$, of the features obtained during training. Plugging Eq. (8) into Eq. (3) yields:

$$
\min_{\boldsymbol{\theta} \in \Theta} \max_{P \in \mathcal{F}_k} \sum_{n=1}^{N} \ell\left(y_n, \langle \boldsymbol{\theta}, P \widetilde{\Phi}(\mathbf{x}_n) \rangle\right) \tag{10}
$$

which is identical to the linear game in Eq. (3), except that it uses the transformed features $\widetilde{\Phi}(\mathbf{x}_n)$ to approximate the true feature map $\Phi$.

Importantly, the game in Eq. (3) is convex-concave, so the Nyström approximation allows us to derive a kernelized convex-concave game in Eq. (10). The runtime of a gradient-based optimization procedure for this game over $T$ epochs is now $\mathcal{O}\left(TN^2 + N^3\right)$. While this is an improvement over $\mathcal{O}\left(TN^4\right)$, it is still not fast enough to be of practical use.

# 3.3.2 Improving the Runtime

The runtime bottleneck of the game in Eq. (10) is the $\mathcal{O}\left(N^3\right)$ time it takes to compute the eigendecomposition of the Gram matrix $K$. Under the assumption that $\mathrm{rank}(K) = L$, we can improve this bound to $\mathcal{O}\left(L^3 + L^2 N\right)$ (Drineas and Mahoney, 2005). For the case that $L \ll N$, this is a substantial improvement. Moreover, it implies that the approximate feature map satisfies $\widetilde{\Phi}(\mathbf{x}_n) \in \mathbb{R}^L$.
After $T$ steps of optimization, the runtime is now $\mathcal{O}\left(TL^2 + L^3 + L^2 N\right)$, which is fast enough to be useful in practice. A natural question to ask is what happens if we apply the Nyström approximation, thereby assuming $\mathrm{rank}(K) = L$, when in practice $\mathrm{rank}(K) > L$. In this case, we are effectively computing a low-rank approximation of the kernel matrix. Several bounds on the accuracy of this approximation have been proven in the literature (Drineas and Mahoney, 2005; Jin et al., 2013; Nemtsov et al., 2016); we refer the reader to these works for more details on the approximation error.

# 3.4 Pre-image Mapping

After solving the game in Eq. (10), we obtain a projection matrix $P$ that neutralizes the concept inside an (approximated) RKHS. In other words, we have a function $r(\cdot)$ that prevents a classifier from predicting a concept label from the representation $\widetilde{\Phi}(\mathbf{x}_n)$. However, for many applications, we want a version of the input $\mathbf{x}_n$ in the original space with the concept neutralized, i.e., we want $r(\mathbf{x}_n)$. Neutralization in the original space requires solving the pre-image problem (Mika et al., 1998). In the case of Nyström features, we seek a mapping $P\widetilde{\Phi}(\mathbf{x}) \mapsto \mathbf{x}$, i.e., a mapping from $\mathbb{R}^L$ to $\mathbb{R}^D$, that projects the neutralized features back into the input space.

In practice, this task can also be performed via a mapping from $\mathbb{R}^D$ to $\mathbb{R}^D$ that learns to reproduce in the input space the transformation that $P$ performs in the RKHS. We choose the latter approach, and train a multilayer perceptron (MLP) $f_{\lambda}(\cdot): \mathbb{R}^{D} \to \mathbb{R}^{D}$.
To estimate the parameters $\lambda$ of the MLP $f_{\lambda}(\cdot)$, we optimize a two-term objective over all points $\mathbf{x}_n$:

$$
\operatorname*{argmin}_{\lambda \in \Lambda} \left\|P\widetilde{\Phi}(\mathbf{x}_n) - \widetilde{\Phi}\left(f_{\lambda}(\mathbf{x}_n)\right)\right\|_2^2 + \left\|(I - P)\,\widetilde{\Phi}\left(f_{\lambda}(\mathbf{x}_n)\right)\right\|_2^2 \tag{11}
$$

where $\Lambda$ is the parameter space. The first term encourages $f_{\lambda}(\cdot)$ to perform the same transformation in the input space as $P$ does in the RKHS. The second term ensures that $P$ has no effect on the RKHS features computed on the neutralized $f_{\lambda}(\mathbf{x}_n)$.

# 4 Experimental Setup

In §3, we established an algorithm that allows us to attempt kernelized concept erasure of non-linearly encoded concepts. To summarize, this method requires first solving the game in Eq. (10) for a chosen neutralizing kernel, then training a pre-image network according to Eq. (11) to obtain neutralized representations in the input space.

We hypothesize that a non-linearly encoded concept can be exhaustively removed after mapping into the right RKHS. In order for this to hold, the neutralized representations must satisfy two conditions: adversaries using non-linear classifiers should not be able to predict the erased concept from these representations, and these representations should preserve all other information encoded in them prior to erasure. With binary gender as our non-linearly encoded concept, we conduct several experiments testing both conditions. Before presenting our results, we lay out our experimental setup (see App. A.3 for more details).

Data. We run our main experiments on the identification and erasure of binary gender in static GloVe representations (Pennington et al., 2014).
We focus on Ravfogel et al.'s (2020) dataset, where word representations are coupled with binary labels indicating whether they are male-biased or female-biased. As a preprocessing step, we normalize the GloVe representations to have unit norm. For an extrinsic evaluation of our method on a main task (profession prediction), we use the Bias-in-Bios dataset of De-Arteaga et al. (2019), which consists of a large set of short biographies annotated for both gender and race. Following Ravfogel et al. (2022), we embed each biography using the [CLS] representation of pre-trained BERT.

Kernels. We consider the following kernels:

- Poly: $\kappa(\mathbf{x},\mathbf{y}) = (\gamma \mathbf{x}^{\top}\mathbf{y} + \alpha)^{d}$
- RBF: $\kappa(\mathbf{x},\mathbf{y}) = \exp\left(-\gamma ||\mathbf{x} - \mathbf{y}||_2^2\right)$
- Laplace: $\kappa(\mathbf{x},\mathbf{y}) = \exp(-\gamma ||\mathbf{x} - \mathbf{y}||_1)$
- Linear: $\kappa(\mathbf{x},\mathbf{y}) = \mathbf{x}^\top \mathbf{y}$
- Sigmoid: $\kappa(\mathbf{x},\mathbf{y}) = \tanh(\gamma \mathbf{x}^{\top}\mathbf{y} + \alpha)$
- Multiple: a convex combination of the above kernels. We consider the following two methods for combining kernels:
  - EasyMKL: a convex combination learned with the EasyMKL algorithm (Aiolli and Donini, 2015), targeted for gender prediction.
  - UniformMK: a uniform combination of all kernels.

We experiment with different values for the hyperparameters $\gamma > 0$, $\alpha > 0$ and $d > 0$ (see App. A.3 for details). We use $L = 1024$-dimensional vectors for the Nyström approximation.

Reported metrics. Each result is reported as the mean $\pm$ standard deviation, computed across four runs of the experiment with random restarts.

Solving the minimax game. We solve the relaxed adversarial game given in Eq. (10) by alternating gradient-based optimization (Goodfellow et al., 2016).
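The base kernels listed above are simple to state in code. A minimal sketch, assuming NumPy, with illustrative hyperparameter values (the actual values are tuned as described in App. A.3), including a UniformMK-style uniform combination:

```python
import numpy as np

# Base kernels from the list above; γ, α, d values here are illustrative defaults
def poly(x, y, gamma=1.0, alpha=1.0, d=2):
    return (gamma * x @ y + alpha) ** d

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def laplace(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum(np.abs(x - y)))

def linear(x, y):
    return x @ y

def sigmoid(x, y, gamma=0.1, alpha=0.0):
    return np.tanh(gamma * x @ y + alpha)

KERNELS = [poly, rbf, laplace, linear, sigmoid]

def uniform_mk(x, y):
    # UniformMK: uniform convex combination (weight 1/|KERNELS| each) of the base kernels
    return sum(kern(x, y) for kern in KERNELS) / len(KERNELS)

rng = np.random.default_rng(4)
x, y = rng.normal(size=3), rng.normal(size=3)
print(uniform_mk(x, y))
```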
Concretely, we alternate between updating the predictor's parameters $\theta$ and the projection matrix $P$. Updates to $\theta$ are performed with gradient descent, and updates to $P$ are performed with gradient ascent, including a projection onto the Fantope to ensure that the constraint is met. For the Fantope projection step, we use Vu et al.'s (2013) algorithm, the details of which are restated in Ravfogel et al. (2022).

Pre-image calculation. As our pre-image network $f_{\lambda}(\cdot)$, we use an MLP with two hidden layers of sizes 512 and 300, respectively. We use layer normalization after each hidden layer and ReLU activations. See App. A.3.1 for basic empirical validation of the pre-image calculation procedure.

# 5 Effect on Concept Encoding

In this section, we pose the exhaustive RKHS hypothesis: that binary gender can be exhaustively removed once the representations are mapped into the right RKHS. That is, there exists a unique kernel such that, for any choice of non-linear predictor, the adversary cannot recover gender information from the pre-image representations obtained via that kernel. As a baseline, we note that gender prediction accuracy on the original representations, prior to any intervention, is above $99\%$ with every kernel, including the linear kernel. This means that the gender concept is linearly separable in the original input space. In this context, we conduct the following experiments on gender neutralization.

Same adversary. We start by calculating the neutralized pre-images for each kernel type, and then apply the same kernel adversary to recover gender information. This experiment tests whether we can protect against the same kernel adversary.

Transfer between kernels. To directly test the exhaustive RKHS hypothesis, we calculate the neutralized pre-image representations with respect to a neutralizing kernel, and then use a different adversarial kernel to recover gender.
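In all of these experiments, the core neutralizing operation is an orthogonal projection. As a toy finite-dimensional illustration (not the paper's kernelized implementation; the direction `w` below is a random stand-in for a learned concept direction), projecting onto the orthogonal complement of a single direction removes every component along it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy neutralization: project representations onto the orthogonal
# complement of a single "concept" direction w, P = I - w w^T / (w^T w).
d = 5
w = rng.normal(size=d)                    # stand-in for a learned direction
P = np.eye(d) - np.outer(w, w) / (w @ w)  # orthogonal-complement projection

X = rng.normal(size=(100, d))             # toy representations
X_neutral = X @ P.T                       # neutralized representations

# No neutralized vector retains any component along w, so a linear
# predictor that reads off the direction w sees only zeros.
residual = np.max(np.abs(X_neutral @ w))
```

In the kernelized setting, the same projection is applied to (approximate) RKHS features rather than to the input vectors themselves.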
For instance, we calculate neutralized pre-image representations with respect to a polynomial kernel, and then predict gender using a Laplace kernel, or a polynomial kernel with different hyperparameters.

Biased associations. Gender can manifest itself in the pre-image representations via biased associations, even when gender is neutralized according to our adversarial test. To assess the impact of our intervention on this notion of gender encoding, we run the WEAT test (Islam et al., 2017) on the neutralized pre-image representations.

# 5.1 Pre-image Gender Recovery: Same Adversary

In Table 1, we report average gender prediction accuracy on the neutralized pre-images for the case where the neutralizing and adversarial kernels are of the same type and share the same hyperparameters.$^{8}$ Numbers are averages over the results of all hyperparameter values of each kernel. See App. A.4 for the full results. As can be seen, for most, but not all, kernels, we effectively hinder the ability of the non-linear classifier to predict gender.

| Type | Accuracy |
| --- | --- |
| Poly | 0.59 ± 0.15 |
| RBF | 0.69 ± 0.16 |
| Laplace | 0.75 ± 0.11 |
| Linear | 0.54 ± 0.02 |
| Sigmoid | 0.49 ± 0.00 |
| EasyMKL | 0.69 ± 0.01 |
| UniformMK | 0.49 ± 0.00 |

Table 1: Gender prediction accuracy from the neutralized pre-image representations when using the same kernel for neutralization and recovery. Numbers are averages over hyperparameters of each kernel and over four randomized runs of the experiment.

# 5.2 Pre-image Gender Recovery: Transfer Between Kernels

In Table 2, we report average gender prediction accuracy on the neutralized pre-images for the case where the neutralizing and adversarial kernels are different. Numbers are averages over several different hyperparameter settings for the neutralizing kernel, while the adversarial kernel hyperparameters are fixed as detailed in App. A.7. Also, see the appendix for a full breakdown by neutralizing kernel hyperparameters. Table 2 allows us to test the exhaustive RKHS hypothesis. Under this hypothesis, we would expect to see that for at least one neutralizing kernel, no non-linear classifier is able to accurately recover gender. For a thorough test of the hypothesis, we introduce an MLP with a single hidden layer as an additional adversary.

Remarkably, we observe a complete lack of generalization of our concept erasure intervention to other types of non-linear predictors. In particular, no neutralizing kernel significantly hinders the ability of an MLP with a single hidden layer to predict gender: the MLP always recovers the gender labels with an accuracy of $97\%$. Furthermore, concept erasure does not transfer between different kernel types, and even between kernels of the same family with different hyperparameter settings.
For instance, when using a polynomial neutralizing kernel, we protect against a polynomial adversarial kernel with the same parameters (mean accuracy of $59\%$ in Table 1), but not against a polynomial adversarial kernel with different hyperparameters ($98\%$ mean accuracy for Poly in Table 2).

We do see transfer to sigmoid, and, to a lesser degree, linear kernel adversaries, with a classification accuracy of $54 - 65\%$ for the linear adversary, and $49\%$ for the sigmoid adversary. Surprisingly, the sigmoid kernel seems weaker than the linear kernel, achieving a near-random accuracy of $49\%$. The results do not show evidence of a proper hierarchy in the expressiveness of the different kernels, and convex combinations of the different kernels, whether learned (EasyMKL) or uniform (UniformMK), do not provide better protection than individual kernels.

In short, while we are able to effectively protect against the same kernel, transfer to different kernels is non-existent. This result does not support the exhaustive RKHS hypothesis: we do not find a single RKHS that exhaustively encodes the binary gender concept.

# 5.3 Effect on Gendered Word Associations

In the case where the concept of gender cannot be recovered by an adversary, binary gender could still manifest itself in more subtle ways. For instance, it may be the case that the names of gender-biased professions, such as STEM fields, are closer in representation space to male-associated words than to female-associated words. We aim to measure the extent to which our neutralized pre-image representations exhibit this measure of gender bias.

Evaluation. To quantify bias associations, Islam et al. (2017) propose the WEAT word association test.
This test measures WEAT's d, a statistic that quantifies the difference in similarity between two sets of gendered words (e.g., male first names and female first names) and two sets of potentially biased words (e.g., stereotypically male and stereotypically female professions). We repeat the experiments of Gonen and Goldberg (2019) and Ravfogel et al. (2020). Following Gonen and Goldberg (2019), we represent the male and female groups with names commonly associated with males and females, rather than with explicitly gendered words (e.g., pronouns). Three tests evaluate the association between name groups and i) career and family-related words; ii) art and mathematics-related words; and iii) names of artistic and scientific fields. Successful neutralization of gender would imply that these word groups are less closely associated in the pre-image representations.

| Neutralizing kernel | Poly | RBF | Laplace | Linear | Sigmoid | EasyMKL | UniformMK | MLP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Poly | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.93 ± 0.00 | 0.55 ± 0.01 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.49 ± 0.00 | 0.97 ± 0.00 |
| RBF | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.93 ± 0.00 | 0.59 ± 0.02 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.49 ± 0.00 | 0.97 ± 0.00 |
| Laplace | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.94 ± 0.00 | 0.61 ± 0.01 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.59 ± 0.02 | 0.97 ± 0.00 |
| Linear | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.93 ± 0.00 | 0.54 ± 0.02 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.49 ± 0.00 | 0.97 ± 0.00 |
| Sigmoid | 0.98 ± 0.00 | 0.93 ± 0.01 | 0.89 ± 0.01 | 0.65 ± 0.03 | 0.49 ± 0.00 | 0.97 ± 0.00 | 0.64 ± 0.03 | 0.97 ± 0.00 |
| EasyMKL | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.94 ± 0.00 | 0.57 ± 0.03 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.49 ± 0.00 | 0.97 ± 0.00 |
| UniformMK | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.94 ± 0.01 | 0.58 ± 0.08 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.49 ± 0.00 | 0.97 ± 0.00 |

Table 2: Evaluation of the neutralized pre-image representations when using a different kernel for neutralization and recovery. The neutralizing kernel is presented in the rows, and the kernel adversary used for recovery in the columns. Note that on the diagonal, the hyperparameters of the same kernel family differ between the neutralizing kernel and the adversary that recovers gender from the pre-image representations. See App. A.7 for a breakdown by neutralizing kernel hyperparameters and for details on the hyperparameters of the kernel adversaries.

| Kernel | WEAT's d | WEAT's p-value |
| --- | --- | --- |
| Poly | 0.74 ± 0.01 | 0.08 ± 0.00 |
| RBF | 0.74 ± 0.00 | 0.08 ± 0.00 |
| Laplace | 0.71 ± 0.03 | 0.09 ± 0.01 |
| Linear | 0.74 ± 0.00 | 0.08 ± 0.00 |
| Sigmoid | 0.75 ± 0.02 | 0.08 ± 0.01 |
| EasyMKL | 0.73 ± 0.00 | 0.08 ± 0.00 |
| UniformMK | 0.73 ± 0.00 | 0.08 ± 0.00 |
| Original | 1.56 | 0.000 |

Table 3: WEAT results on pre-image representations. Numbers are averages over hyperparameters of each kernel.

Results. In Table 3, we report the test statistic and the $p$-value for the third test, using the names of scientific and artistic fields to represent gender-biased words.$^{10}$ For all kernels, we observe a significant drop in the test statistic, from the original value of 1.56 to around 0.74. This suggests that the intervention significantly decreases the association between female and male names and stereotypically biased words. Notably, the reduction is similar for all kernels, including the linear one. While non-linear erasure is more effective in neutralizing gender against adversarial recovery, linear and non-linear methods perform equally well according to this bias association test. This finding highlights the importance of measuring different manifestations of a concept when using concept neutralization as a bias mitigation method.

# 6 Negative Impact on the Representations

Our method has shown a satisfactory ability to prevent the same kernel from recovering the concept. However, does erasure remove too much information? As previously stated, our intervention should erase a concept without altering any of the other information encoded in the original representations. In this section, we evaluate whether the non-gender-related semantic content of the original representations is preserved in our neutralized pre-images.
We do so via the following tests: i) an intrinsic evaluation of the semantic content of the neutralized pre-image word representation space, and ii) an extrinsic evaluation of our method when applied to contextualized word representations for a profession prediction task, measuring the extent to which we hinder a model's ability to perform the main task.

# 6.1 Intrinsic Evaluation of Damage to Semantic Content

To measure the influence of our method on the semantics encoded in the representation space, we use SimLex-999 (Hill et al., 2015), an annotated dataset of word pairs with human similarity scores for each pair. First, we calculate the cosine similarity between the representations of each pair of words using the original representations. Then, we repeat this calculation for each type of kernel using the pre-image representations. Finally, we measure the correlation between the similarity of words in representation space and human similarity scores, before and after intervention. The original correlation is 0.400, and it is left nearly unchanged by any of the kernel interventions, yielding values between 0.387 and 0.396. To qualitatively demonstrate the absence of negative impact, we show in App. A.5 that the nearest neighbors of randomly sampled words do not change significantly after gender erasure.

# 6.2 Extrinsic Evaluation on Contextualized Representations

The previous experiments focused on the influence of concept neutralization on uncontextualized representations. Here, we apply our concept neutralization method to contextualized BERT representations and assess its effect on profession prediction. We embed each biography in the dataset of De-Arteaga et al. (2019) using the [CLS] representation of pre-trained BERT, and apply our method using only the RBF kernel. After collecting the pre-image representations, we train a linear classifier on the main task of profession prediction.

Results.
Averaged over different hyperparameter settings of the RBF kernel, we achieve a profession prediction accuracy after neutralization of $74.19 \pm 0.056\%$. For reference, prediction accuracy using the original BERT representations is $76.93\%$. This suggests that the pre-images still encode most of the profession information, which is largely orthogonal to the neutralized gender information.

# 7 Discussion

We have demonstrated that in the case where the neutralizing kernel and the adversarial kernel are the same, we are able to neutralize a non-linearly encoded concept reasonably well. We have also shown that our method neutralizes gender in a comprehensive manner, without damaging the representation. However, this neutralization does not transfer to different non-linear adversaries, which are still able to recover gender.

While the lack of transfer to other non-linear predictors may seem surprising, one should keep in mind that changing the kernel type, or changing kernel hyperparameters, results in a different implicit feature mapping. Even the features defined by a linear kernel are not a proper subset of the features defined by a polynomial kernel of degree 2.$^{12}$ As such, removing the features which make the concept of interest linearly separable in one RKHS does not necessarily prevent a classifier parameterized by another kernel, or an MLP, from predicting the concept. In the context of gender erasure, these results suggest that protection against a diverse set of non-linear adversaries remains an open problem.

# 8 Conclusion

We propose a novel method for the identification and erasure of non-linearly encoded concepts in neural representations. We first map the representations to an RKHS, before identifying and neutralizing the concept in that space. We use our method to empirically assess the exhaustive RKHS hypothesis: we hypothesize that there exists a unique kernel that exhaustively identifies the concept of interest.
We find that while we are able to protect against a kernel adversary of the same type, this protection does not transfer to different non-linear classifiers, thereby contradicting the exhaustive RKHS hypothesis. Exhaustive concept erasure and protection against a diverse set of non-linear adversaries remain an open problem.

# Limitations

The empirical experiments in this work involve the removal of binary gender information from pretrained representations. We note our treatment of gender as a binary concept, when it is not one, as a major limitation of our work. This task may have real-world applications, in particular relating to fairness. We would encourage readers to be careful when attempting to deploy methods such as the one discussed in this paper. Regardless of any proofs, one should carefully measure the effectiveness of the approach in the context in which it is to be deployed. Please consider, among other things, the exact data to be used, the fairness metrics under consideration, and the overall application.

We urge practitioners not to regard this method as a solution to the problem of bias in neural models, but rather as a preliminary research effort toward mitigating certain aspects of the problem. Unavoidably, the datasets we use do not reflect all the subtle and implicit ways in which gender bias is manifested. As such, it is likely that different forms of bias still exist in the representations following the application of our method.

# Ethical Concerns

We do not foresee any ethical concerns with this work.

# Acknowledgements

The authors sincerely thank Clément Guerner for his thoughtful and comprehensive comments and revisions to the final version of this work. This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, grant agreement No. 802774 (iEXTRACT). Ryan Cotterell acknowledges Google for support from the Research Scholar Program.
# References

Fabio Aiolli and Michele Donini. 2015. *EasyMKL: a scalable multiple kernel learning algorithm*. Neurocomputing, 169:215-224.
M. A. Aizerman, E. A. Braverman, and L. Rozonoer. 1964. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25(6):917-936.
Nachman Aronszajn. 1950. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68(3):337-404.
Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems, volume 29.
Stephen P. Boyd and Lieven Vandenberghe. 2014. Convex Optimization. Cambridge University Press.
Emanuele Bugliarello, Ryan Cotterell, Naoaki Okazaki, and Desmond Elliott. 2021. Multimodal pretraining unmasked: A meta-analysis and a unified framework of vision-and-language BERTs. Transactions of the Association for Computational Linguistics, 9:978-994.
Stéphane Canu and Alex Smola. 2006. Kernel methods and the exponential family. Neurocomputing, 69(7-9):714-720.
Hande Celikkanat, Sami Virpioja, Jörg Tiedemann, and Marianna Apidianaki. 2020. Controlling the imprint of passivization and negation in contextualized representations. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 136-148, Online. Association for Computational Linguistics.
Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, Jennifer T. Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Cem Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. CoRR, abs/1901.09451.
Sunipa Dev and Jeff M Phillips. 2019. Attenuating bias in word vectors. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 879-887. PMLR.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Petros Drineas and Michael W. Mahoney. 2005. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6:2153-2175. +Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2021. Amnesic probing: Behavioral explanation with amnesic counterfactuals. Transactions of the Association for Computational Linguistics, 9:160-175. +Ky Fan. 1953. Minimax theorems. Proceedings of the National Academy of Sciences of the United States of America, 39(1):42. +Peter Gärdenfors. 2000. Conceptual Spaces: The Geometry of Thought. The MIT Press. +Gene H. Golub and Charles F. Van Loan. 2013. Matrix Computations. Johns Hopkins Press. +Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609-614, Minneapolis, Minnesota. Association for Computational Linguistics. +Hila Gonen, Shauli Ravfogel, Yanai Elazar, and Yoav Goldberg. 2020. It's not Greek to mBERT: Inducing word-level translations from multilingual BERT. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 45-56, Online. Association for Computational Linguistics. +Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. 
+ +Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, volume 27. +Evan Hernandez and Jacob Andreas. 2021. The low-dimensional linear geometry of contextualized word representations. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 82-93, Online. Association for Computational Linguistics. +Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665-695. +Thomas Hofmann, Bernhard Scholkopf, and Alexander J. Smola. 2008. Kernel methods in machine learning. The Annals of Statistics, 36(3):1171-1220. +Aylin Caliskan Islam, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora necessarily contain human biases. Science, 356(6334):183-186. +Rong Jin, Tianbao Yang, Mehrdad Mahdavi, Yu-Feng Li, and Zhi-Hua Zhou. 2013. Improved bounds for the Nyström method with application to kernel classification. IEEE Transactions on Information Theory, 59(10):6939-6949. +Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. 2018. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2668-2677. PMLR. +George S. Kimeldorf and Grace Wahba. 1970. A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. The Annals of Mathematical Statistics, 41(2):495-502. +Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, volume 32. 
Curran Associates, Inc.
Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It's all in the name: Mitigating gender bias with name-based counterfactual data substitution. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5267-5275, Hong Kong, China. Association for Computational Linguistics.
Sebastian Mika, Bernhard Scholkopf, Alex Smola, Klaus-Robert Müller, Matthias Scholz, and Gunnar Ratsch. 1998. Kernel PCA and de-noising in feature spaces. In Advances in Neural Information Processing Systems, volume 11, pages 536-542.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations.
Arik Nemtsov, Amir Averbuch, and Alon Schclar. 2016. Matrix compression using the Nyström method. Intelligent Data Analysis, 20(5):997-1019.
John von Neumann and Oskar Morgenstern. 1944. Theory of Games and Economic Behavior. Princeton University Press.
Evert J. Nyström. 1930. Über die praktische Auflösung von Integralgleichungen mit Anwendungen auf Randwertaufgaben. Acta Mathematica, 54:185-204.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237-7256, Online. Association for Computational Linguistics.
Shauli Ravfogel, Grusha Prasad, Tal Linzen, and Yoav Goldberg. 2021.
Counterfactual interventions reveal the causal effect of relative clause representations on agreement prediction. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 194-209, Online. Association for Computational Linguistics. +Shauli Ravfogel, Michael Twiton, Yoav Goldberg, and Ryan Cotterell. 2022. Linear adversarial concept erasure. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 18400-18421. PMLR. +Bashir Sadeghi, Runyi Yu, and Vishnu Boddeti. 2019. On the global optima of kernelized adversarial representation learning. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 7970-7978. IEEE. +Bernhard Scholkopf, Alexander Smola, and Klaus-Robert Müller. 1997. Kernel principal component analysis. In Artificial Neural Networks — ICANN'97, pages 583–588, Berlin, Heidelberg. Springer Berlin Heidelberg. +Bernhard Scholkopf and Alexander J. Smola. 2002. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press. + +John Shawe-Taylor and Nello Cristianini. 2004. *Kernel Methods for Pattern Analysis*. Cambridge University Press. +Francisco Vargas and Ryan Cotterell. 2020. Exploring the linear subspace hypothesis in gender bias mitigation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2902-2913, Online. Association for Computational Linguistics. +Vincent Q. Vu, Juhee Cho, Jing Lei, and Karl Rohe. 2013. Fantope projection and selection: A near-optimal convex relaxation of sparse PCA. In Advances in Neural Information Processing Systems, volume 26. +Jennifer C. White, Tiago Pimentel, Naomi Saphra, and Ryan Cotterell. 2021. A non-linear structural probe. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 132-138, Online. 
Association for Computational Linguistics. +Christopher Williams and Matthias Seeger. 2000. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems, volume 13. +Kosaku Yosida. 2012. Functional Analysis. Springer. + +# A Related Work + +Mitigation of Gender Bias. The identification of linear subspaces that encode binary gender has attracted considerable research interest (Bolukbasi et al., 2016; Gonen and Goldberg, 2019; Dev and Phillips, 2019; Ravfogel et al., 2020). While bias mitigation is a central use case of concept erasure, concept subspaces have been applied to a number of tasks. Concept subspaces have been used to analyze the content of neural representations, e.g., for causal analysis (Elazar et al., 2021; Ravfogel et al., 2021), for analyzing the geometry of the representation space (Celikkanat et al., 2020; Gonen et al., 2020; Hernandez and Andreas, 2021), and for concept-based interpretability (Kim et al., 2018). + +Kernelization of Linear Methods. The kernelization of linear machine learning algorithms is a common practice, and has many use cases, such as the kernelized perceptron (Aizerman et al., 1964) and kernel PCA (Schölkopf et al., 1997). White et al. (2021) proposed a kernelization of a structural probe that extracts syntactic structure from neural representations. Vargas and Cotterell (2020) proposed a kernelization of the PCA-based bias mitigation method of Bolukbasi et al. (2016), and found that it does not improve on the linear mitigation procedure. Since the effectiveness of this method has been questioned (Gonen and Goldberg, 2019), we consider a more principled and well-motivated approach for the identification and neutralization of the concept subspace. Sadeghi et al. (2019) proposed a kernelization of an alternative, regression-based linear adversarial objective, which is not limited to orthogonal projections. 
Our formulation is different in that it considers any linear model, and is restricted to the neutralization of linear subspaces via projection. This makes our method potentially less expressive, but more interpretable.

# A.1 A Representative Lemma for Kernelized Minimax Games

Lemma 1. (Minimax Game Representative Theorem) Let $\mathcal{H}$ be a reproducing kernel Hilbert space with canonical feature map $\Phi: \mathbb{R}^D \to \mathcal{H}$, i.e., $\Phi(\mathbf{x}) = \kappa(\mathbf{x}, \cdot)$. Consider the game:

$$
\max_{\boldsymbol{w} \in \mathcal{H}} \min_{\boldsymbol{\theta} \in \mathcal{H}} \sum_{n=1}^{N} \ell\left(y_n, \left\langle \boldsymbol{\theta}, \mathrm{P}_{\boldsymbol{w}}^{\perp} \Phi(\mathbf{x}_n) \right\rangle\right) \tag{5}
$$

where $\mathrm{P}_{\boldsymbol{w}}^{\perp}$ is the operator that projects onto the orthogonal complement of $\boldsymbol{w}$. For every attained local optimum $\boldsymbol{\theta}^{*}$, $\boldsymbol{w}^{*}$ of Eq. (5), there is another local optimum $\boldsymbol{\theta}_{U}^{*}$, $\boldsymbol{w}_{U}^{*}$ with the same value as $\boldsymbol{\theta}^{*}$, $\boldsymbol{w}^{*}$ in $U \stackrel{\mathrm{def}}{=} \operatorname{span}\left\{\Phi(\mathbf{x}_1), \ldots, \Phi(\mathbf{x}_N)\right\}$, the span of the training data.$^{13}$

Proof. For brevity, first notice we can re-express the objective as

$$
\max_{\boldsymbol{w} \in \mathcal{H}} \min_{\boldsymbol{\theta} \in \mathcal{H}} \sum_{n=1}^{N} \ell\left(y_n, \left\langle \boldsymbol{\theta}, \left(\mathrm{I} - \frac{\boldsymbol{w}\boldsymbol{w}^{\top}}{\boldsymbol{w}^{\top}\boldsymbol{w}}\right) \Phi(\mathbf{x}_n) \right\rangle\right) \tag{12}
$$

We will show that both $\boldsymbol{w}$ and $\boldsymbol{\theta}$ can be expressed as a linear combination of terms from the training data without losing expressive power.
Now, decompose $\boldsymbol{w}$ as follows: $\boldsymbol{w} = \boldsymbol{w}_U + \boldsymbol{w}_{\perp U}$, where we represent $\boldsymbol{w}$ as the sum of $\boldsymbol{w}$ projected onto $U$ and onto its orthogonal complement $U_{\perp}$. Next, note that for any element of the training data $\Phi(\mathbf{x}_n)$, we have

$$
\begin{aligned}
\left(\mathrm{I} - \frac{\boldsymbol{w}\boldsymbol{w}^{\top}}{\boldsymbol{w}^{\top}\boldsymbol{w}}\right)\Phi(\mathbf{x}_n)
&= \left(\mathrm{I} - \frac{(\boldsymbol{w}_U + \boldsymbol{w}_{\perp U})(\boldsymbol{w}_U + \boldsymbol{w}_{\perp U})^{\top}}{\boldsymbol{w}^{\top}\boldsymbol{w}}\right)\Phi(\mathbf{x}_n) && \text{(13a)} \\
&= \left(\mathrm{I} - \frac{\boldsymbol{w}_U\boldsymbol{w}_U^{\top}}{\boldsymbol{w}^{\top}\boldsymbol{w}} - \underbrace{\frac{\boldsymbol{w}_U\boldsymbol{w}_{\perp U}^{\top}}{\boldsymbol{w}^{\top}\boldsymbol{w}}}_{=0} - \underbrace{\frac{\boldsymbol{w}_{\perp U}\boldsymbol{w}_U^{\top}}{\boldsymbol{w}^{\top}\boldsymbol{w}}}_{=0} - \frac{\boldsymbol{w}_{\perp U}\boldsymbol{w}_{\perp U}^{\top}}{\boldsymbol{w}^{\top}\boldsymbol{w}}\right)\Phi(\mathbf{x}_n) && \text{(13b)} \\
&= \left(\mathrm{I} - \frac{\boldsymbol{w}_U\boldsymbol{w}_U^{\top}}{\boldsymbol{w}^{\top}\boldsymbol{w}} - \frac{\boldsymbol{w}_{\perp U}\boldsymbol{w}_{\perp U}^{\top}}{\boldsymbol{w}^{\top}\boldsymbol{w}}\right)\Phi(\mathbf{x}_n) && \text{(13c)} \\
&= \Phi(\mathbf{x}_n) - \frac{\boldsymbol{w}_U\boldsymbol{w}_U^{\top}}{\boldsymbol{w}^{\top}\boldsymbol{w}}\Phi(\mathbf{x}_n) - \frac{\boldsymbol{w}_{\perp U}\boldsymbol{w}_{\perp U}^{\top}}{\boldsymbol{w}^{\top}\boldsymbol{w}}\Phi(\mathbf{x}_n) && \text{(13d)} \\
&= \Phi(\mathbf{x}_n) - \boldsymbol{w}_U\frac{\boldsymbol{w}_U^{\top}\Phi(\mathbf{x}_n)}{\boldsymbol{w}^{\top}\boldsymbol{w}} && \text{(13e)}
\end{aligned}
$$

where the last step uses $\boldsymbol{w}_{\perp U}^{\top}\Phi(\mathbf{x}_n) = 0$. However, we have that $\Phi(\mathbf{x}_n) - \boldsymbol{w}_U\frac{\boldsymbol{w}_U^{\top}\Phi(\mathbf{x}_n)}{\boldsymbol{w}^{\top}\boldsymbol{w}}$ is in $U$. Likewise, we can decompose $\boldsymbol{\theta} = \boldsymbol{\theta}_U + \boldsymbol{\theta}_{\perp U}$. Further manipulation reveals

$$
\left\langle \boldsymbol{\theta}, \left(\mathrm{I} - \frac{\boldsymbol{w}\boldsymbol{w}^{\top}}{\boldsymbol{w}^{\top}\boldsymbol{w}}\right)\Phi(\mathbf{x}_n) \right\rangle = \boldsymbol{\theta}_U^{\top}\Phi(\mathbf{x}_n) - \boldsymbol{\theta}_U^{\top}\boldsymbol{w}_U\frac{\boldsymbol{w}_U^{\top}\Phi(\mathbf{x}_n)}{\boldsymbol{w}^{\top}\boldsymbol{w}} \tag{14}
$$

Thus, for any $\boldsymbol{\theta}, \boldsymbol{w} \in \mathcal{H}$ there exists a $\boldsymbol{\theta}_U, \boldsymbol{w}_U \in U$ that yields the same value of the objective as $\boldsymbol{\theta}, \boldsymbol{w}$. Now, we can parameterize $\boldsymbol{\theta}_U$ and $\boldsymbol{w}_U$ as

$$
\boldsymbol{w}_U = \sum_{n=1}^{N} \alpha_n \Phi(\mathbf{x}_n) \tag{15}
$$

$$
\boldsymbol{\theta}_U = \sum_{n=1}^{N} \beta_n \Phi(\mathbf{x}_n) \tag{16}
$$

for real coefficients $\boldsymbol{\alpha} \in \mathbb{R}^N$ and $\boldsymbol{\beta} \in \mathbb{R}^N$. We conclude that for any local optimum $\boldsymbol{\theta}^{*}$, $\boldsymbol{w}^{*} \in \mathcal{H}$, the projection of $\boldsymbol{\theta}^{*}$ and $\boldsymbol{w}^{*}$ onto $U$ yields a local optimum with the same value.

We note that under regularity conditions, i.e., certain compactness and convexity restrictions over the feasible sets and the loss function, the min and the max can be swapped as per the celebrated Von Neumann-Fan minimax theorem (Fan, 1953). For the aforementioned reasons, we believe Lemma 1 justifies the parameterizations used for $\boldsymbol{\theta}, \boldsymbol{w}$, e.g., Eq. (19).

# A.2 Kernelization of the Minimax Game

We show that the game Eq.
(1) can be kernelized for the case $k = 1$, i.e., a setting where the matrix $\mathcal{P}_k$ removes a one-dimensional subspace. Specifically, we will show that the product $\langle \boldsymbol{\theta}, \mathrm{P}\Phi(\mathbf{x}_n) \rangle$ in Eq. (1) can be expressed as a function of the kernel $\kappa(\cdot, \cdot)$.

Lemma 2. Let $\mathcal{H}$ be a reproducing kernel Hilbert space with canonical feature map $\Phi$, and let $\Phi(\mathbf{z})$ be a point in $\mathcal{H}$. Next, let $\boldsymbol{w} = \sum_{n=1}^{N} \alpha_n \Phi(\mathbf{x}_n)$ and $\boldsymbol{\theta} = \sum_{n=1}^{N} \beta_n \Phi(\mathbf{x}_n)$ be points in the reproducing kernel Hilbert space. Now, let $\Phi_{\mathrm{proj}}(\mathbf{z})$ be the orthogonal projection of $\Phi(\mathbf{z})$ onto the orthogonal complement of the subspace spanned by $\boldsymbol{w}$. Then, we have:

$$
\langle \boldsymbol{\theta}, \Phi_{\mathrm{proj}}(\mathbf{z}) \rangle = \sum_{m=1}^{N} \beta_m \left( \kappa(\mathbf{x}_m, \mathbf{z}) - \frac{\boldsymbol{\alpha}^{\top} K^{(m)}(\mathbf{z})\, \boldsymbol{\alpha}}{\boldsymbol{\alpha}^{\top} K \boldsymbol{\alpha}} \right) \tag{6}
$$

where $K_{ij}^{(m)}(\mathbf{z}) \stackrel{\mathrm{def}}{=} \kappa(\mathbf{x}_i, \mathbf{z})\, \kappa(\mathbf{x}_m, \mathbf{x}_j)$.

Proof.
The projection onto the orthogonal complement of $\boldsymbol{w} = \sum_{n=1}^{N} \alpha_n \Phi(\mathbf{x}_n)$ is defined as the following:

$$
\mathrm{P}_{\boldsymbol{w}}^{\perp} \stackrel{\mathrm{def}}{=} \mathrm{I} - \frac{\left(\sum_{n=1}^{N} \alpha_n \Phi(\mathbf{x}_n)\right)\left(\sum_{n=1}^{N} \alpha_n \Phi(\mathbf{x}_n)^{\top}\right)}{\left(\sum_{n=1}^{N} \alpha_n \Phi(\mathbf{x}_n)^{\top}\right)\left(\sum_{n=1}^{N} \alpha_n \Phi(\mathbf{x}_n)\right)} \tag{17}
$$

where $\mathrm{I}$ is the identity operator. Algebraic manipulation reveals

$$
\begin{aligned}
\mathrm{P}_{\boldsymbol{w}}^{\perp}\Phi(\mathbf{z}) &= \mathrm{I}\,\Phi(\mathbf{z}) - \frac{\left(\sum_{n=1}^{N} \alpha_n \Phi(\mathbf{x}_n)\right)\left(\sum_{m=1}^{N} \alpha_m \Phi(\mathbf{x}_m)^{\top}\right)}{\left(\sum_{n=1}^{N} \alpha_n \Phi(\mathbf{x}_n)^{\top}\right)\left(\sum_{n=1}^{N} \alpha_n \Phi(\mathbf{x}_n)\right)}\,\Phi(\mathbf{z}) && \text{(18a)} \\
&= \Phi(\mathbf{z}) - \frac{\sum_{n=1}^{N}\sum_{m=1}^{N} \alpha_n \alpha_m \Phi(\mathbf{x}_n)\Phi(\mathbf{x}_m)^{\top}\Phi(\mathbf{z})}{\sum_{n=1}^{N}\sum_{m=1}^{N} \alpha_n \alpha_m \Phi(\mathbf{x}_n)^{\top}\Phi(\mathbf{x}_m)} && \text{(18b)} \\
&= \Phi(\mathbf{z}) - \frac{\sum_{n=1}^{N}\sum_{m=1}^{N} \alpha_n \alpha_m \Phi(\mathbf{x}_n)\,\kappa(\mathbf{x}_m, \mathbf{z})}{\sum_{n=1}^{N}\sum_{m=1}^{N} \alpha_n \alpha_m \kappa(\mathbf{x}_n, \mathbf{x}_m)} && \text{(18c)} \\
&= \Phi(\mathbf{z}) - \underbrace{\left(\frac{\sum_{m=1}^{N} \alpha_m \kappa(\mathbf{x}_m, \mathbf{z})}{\boldsymbol{\alpha}^{\top} K \boldsymbol{\alpha}}\right)}_{\in\, \mathbb{R}} \sum_{n=1}^{N} \alpha_n \Phi(\mathbf{x}_n) && \text{(18d)} \\
&= \Phi(\mathbf{z}) - \left(\frac{\sum_{m=1}^{N} \alpha_m \kappa(\mathbf{x}_m, \mathbf{z})}{\boldsymbol{\alpha}^{\top} K \boldsymbol{\alpha}}\right) \boldsymbol{w} && \text{(18e)} \\
&\stackrel{\mathrm{def}}{=} \Phi_{\mathrm{proj}}(\mathbf{z}) && \text{(18f)}
\end{aligned}
$$

Now, consider an element of the reproducing kernel Hilbert space

$$
\boldsymbol{\theta} = \sum_{n=1}^{N} \beta_n \Phi(\mathbf{x}_n) \tag{19}
$$

Further algebraic manipulation reveals

$$
\begin{array}{l} \left\langle \boldsymbol {\theta}, \boldsymbol {\Phi} _ {\operatorname {p r o j}} (\mathbf {z}) \right\rangle = \left\langle \boldsymbol {\theta}, \boldsymbol {\Phi} (\mathbf {z}) - \left(\frac {\sum_ {n = 1} ^ {N} \alpha_ {n} \kappa \left(\mathbf {x} _ {n} , \mathbf {z}\right)}{\boldsymbol {\alpha} ^ {\top} K \boldsymbol {\alpha}}\right) \boldsymbol {w} \right\rangle (20a) \\ = \left\langle \sum_ {m = 1} ^ {N} \beta_ {m} \boldsymbol {\Phi} (\mathbf {x} _ {m}), \boldsymbol {\Phi} (\mathbf {z}) - \left(\frac {\sum_ {n = 1} ^ {N} \alpha_ {n} \kappa (\mathbf {x} _ {n} , \mathbf {z})}{\boldsymbol {\alpha} ^ {\top} K \boldsymbol {\alpha}}\right) \boldsymbol {w} \right\rangle (20b) \\ = \sum_ {m = 1} ^ {N} \beta_ {m} \kappa (\mathbf {x} _ {m}, \mathbf {z}) - \sum_ {m = 1} ^ {N} \beta_ {m} \left(\frac {\sum_ {n = 1} ^ {N} \alpha_ {n} \kappa (\mathbf {x} _ {n} , \mathbf {z})}{\boldsymbol {\alpha} ^ {\top} K \boldsymbol {\alpha}}\right) \kappa (\mathbf {x} _ {m}, \boldsymbol {w}) (20c) \\ = \sum_ {m = 1} ^ {N} \beta_ {m} \kappa (\mathbf {x} _ {m}, \mathbf {z}) - \frac {1}{\boldsymbol
{\alpha} ^ {\top} K \boldsymbol {\alpha}} \sum_ {n = 1} ^ {N} \sum_ {m = 1} ^ {N} \alpha_ {n} \beta_ {m} \kappa (\mathbf {x} _ {n}, \mathbf {z}) \kappa (\mathbf {x} _ {m}, \boldsymbol {w}) (20d) \\ = \sum_ {m = 1} ^ {N} \beta_ {m} \kappa (\mathbf {x} _ {m}, \mathbf {z}) - \sum_ {m = 1} ^ {N} \beta_ {m} \frac {\left(\sum_ {n = 1} ^ {N} \alpha_ {n} \kappa (\mathbf {x} _ {n} , \mathbf {z}) \kappa (\mathbf {x} _ {m} , \boldsymbol {w})\right)}{\boldsymbol {\alpha} ^ {\top} K \boldsymbol {\alpha}} (20e) \\ = \sum_ {m = 1} ^ {N} \beta_ {m} \kappa (\mathbf {x} _ {m}, \mathbf {z}) - \sum_ {m = 1} ^ {N} \beta_ {m} \frac {\left(\sum_ {n = 1} ^ {N} \alpha_ {n} \kappa (\mathbf {x} _ {n} , \mathbf {z}) \boldsymbol {\Phi} (\mathbf {x} _ {m}) ^ {\top} \left(\sum_ {n ^ {\prime} = 1} ^ {N} \alpha_ {n ^ {\prime}} \boldsymbol {\Phi} (\mathbf {x} _ {n ^ {\prime}})\right)\right)}{\boldsymbol {\alpha} ^ {\top} K \boldsymbol {\alpha}} (20f) \\ = \sum_ {m = 1} ^ {N} \beta_ {m} \kappa (\mathbf {x} _ {m}, \mathbf {z}) - \sum_ {m = 1} ^ {N} \beta_ {m} \frac {\left(\sum_ {n = 1} ^ {N} \sum_ {n ^ {\prime} = 1} ^ {N} \alpha_ {n ^ {\prime}} \alpha_ {n} \kappa (\mathbf {x} _ {n} , \mathbf {z}) \boldsymbol {\Phi} (\mathbf {x} _ {m}) ^ {\top} \boldsymbol {\Phi} (\mathbf {x} _ {n ^ {\prime}})\right)}{\boldsymbol {\alpha} ^ {\top} K \boldsymbol {\alpha}} (20g) \\ \end{array} +$$ + +$$ +\begin{array}{l} = \sum_ {m = 1} ^ {N} \beta_ {m} \kappa (\mathbf {x} _ {m}, \mathbf {z}) - \sum_ {m = 1} ^ {N} \beta_ {m} \frac {\left(\sum_ {n = 1} ^ {N} \sum_ {n ^ {\prime} = 1} ^ {N} \alpha_ {n} \alpha_ {n ^ {\prime}} \kappa (\mathbf {x} _ {n} , \mathbf {z}) \kappa (\mathbf {x} _ {m} , \mathbf {x} _ {n ^ {\prime}})\right)}{\boldsymbol {\alpha} ^ {\top} K \boldsymbol {\alpha}} (20h) \\ = \sum_ {m = 1} ^ {N} \beta_ {m} \left(\kappa (\mathbf {x} _ {m}, \mathbf {z}) - \frac {\boldsymbol {\alpha} ^ {\top} K ^ {(m)} (\mathbf {z}) \boldsymbol {\alpha}}{\boldsymbol {\alpha} ^ {\top} K \boldsymbol {\alpha}}\right) (20i) \\ 
\end{array}
$$

where we define the following matrix component-wise: $K_{ij}^{(m)}(\mathbf{z}) \stackrel{\mathrm{def}}{=} \kappa(\mathbf{x}_i, \mathbf{z})\, \kappa(\mathbf{x}_m, \mathbf{x}_j)$. Eq. (20i) can be evaluated without explicitly applying the kernel transformation $\Phi$. In terms of notation, when we have $\left\langle \boldsymbol{\theta}, \Phi_{\mathrm{proj}}(\mathbf{z}_n) \right\rangle$, we write

$$
\sum_{m=1}^{N} \beta_m \left( \kappa(\mathbf{x}_m, \mathbf{z}_n) - \frac{\boldsymbol{\alpha}^{\top} K^{(m,n)} \boldsymbol{\alpha}}{\boldsymbol{\alpha}^{\top} K \boldsymbol{\alpha}} \right) \tag{21}
$$

where we define $K_{ij}^{(m,n)} \stackrel{\mathrm{def}}{=} \kappa(\mathbf{x}_i, \mathbf{z}_n)\, \kappa(\mathbf{x}_m, \mathbf{x}_j)$. This proves the result.

# A.3 Experimental Setting

Data. We conduct experiments on the uncased version of the 300-dimensional GloVe representations, licensed under the Apache License, Version 2.0. Following Ravfogel et al. (2020), to approximate gender labels for the vocabulary, we project all representations onto the he - she direction and take the 7,500 most male-biased and the 7,500 most female-biased words. Note that unlike Bolukbasi et al. (2016), we use the he - she direction only to induce approximate gender labels, but then proceed to measure the bias in various ways that go beyond neutralizing just the he - she direction. We use the same train-dev-test split as Ravfogel et al. (2020), but discard the gender-neutral words (i.e., we cast the problem as binary classification). We obtain training, evaluation, and test sets of sizes 7,350, 3,150, and 4,500, respectively. We perform four independent runs of the entire method for all kernel types, with different random seeds.

The kernelized minimax game.
For each kernel, we experiment with the following combinations of hyperparameter values:

- Poly: $d \in \{2,3\}$; $\gamma \in \{0.05, 0.1, 0.15\}$; $\alpha \in \{0.8, 1, 1.2\}$.
- RBF: $\gamma \in \{0.1, 0.15, 0.2\}$.
- Laplace: $\gamma \in \{0.1, 0.15, 0.2\}$.
- Sigmoid: $\alpha \in \{0, 0.01\}$; $\gamma \in \{0.005, 0.003\}$.

We approximate the kernel space using $L = 1024$ Nyström landmarks. We run the adversarial game Eq. (10) for each of the kernel mappings we consider, performing alternating minimization and maximization over $\theta$ and $P$, respectively. As our optimization procedure, we use stochastic gradient descent with a learning rate of 0.08 and minibatches of size 256. We run for 35,000 batches and choose the projection matrix $P$ that leads to the largest decrease in linear classification accuracy on the evaluation set. In all cases, we identify a matrix which decreases classification accuracy to near-random levels. All training is done on a single NVIDIA GeForce GTX 1080 Ti GPU.

Pre-image mapping. We train an MLP with 2 hidden layers of sizes 512 and 300 to map the original inputs $\mathbf{x}_n$ to inputs which, after being mapped to kernel space, are close to the neutralized features. We use dropout of 0.1, ReLU activations, and layer normalization after each hidden layer. We use a skip connection between the input and the output layer, i.e., we set the final output of the MLP to be the sum of its inputs and outputs. We train for 15,000 batches of size 128 and choose the model that yields the lowest loss on the evaluation set.

Non-linear gender prediction. We consider the following non-linear predictors: SVMs with different kernels, as well as an MLP with 128 hidden units and ReLU activations. We use the sklearn implementation of these predictors. They are trained on the reconstructed pre-image of the training set, and tested on the reconstructed pre-image of the test set.
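The kernel-space setup described above (a Nyström feature map plus a linear adversary on the mapped features) can be sketched with scikit-learn. This is a minimal illustration with synthetic stand-in data, not the authors' released code: the dimensionality, number of landmarks, and $\gamma$ follow the text, while the data and labels are placeholders.

```python
# Minimal sketch: approximate an RBF kernel space with Nystrom landmarks and
# train a linear probe on the mapped features (synthetic stand-in data).
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 300)) / np.sqrt(300)   # stand-in for 300-d GloVe vectors
y = (X[:, 0] > 0).astype(int)                     # stand-in binary gender labels

# Approximate the RBF kernel space with L = 1024 landmarks (gamma from the grid above).
feature_map = Nystroem(kernel="rbf", gamma=0.1, n_components=1024, random_state=0)
Z = feature_map.fit_transform(X)                  # approximate kernel features

# A linear classifier on the mapped features acts as a kernel classifier on X;
# the adversarial game would alternate between such a probe and the projection P.
probe = LogisticRegression(max_iter=1000).fit(Z, y)
print(f"kernel-probe accuracy: {probe.score(Z, y):.2f}")
```

In the actual procedure, the neutralized features would be $P\widetilde{\Phi}(\mathbf{x}_n)$, i.e., `Z @ P.T` for the learned projection, before the pre-image mapping is applied.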
Note that while in training we used an approximation of the kernel function, we predict gender from the pre-images using SVM classifiers that rely on the actual, exact kernel.

# A.3.1 Pipeline Evaluation

In this appendix, we include sanity-check experiments that aim to assess whether the minimax game Eq. (10) effectively removes linearly-present concepts from the non-linear kernel features, and whether the training of the pre-image network succeeds.

Concept erasure in kernel space. Do we effectively neutralize the concept in the approximate kernel space? For each kernel, we solve the game Eq. (10) and use the final projection matrix $P$ to create neutralized features. We neutralize the features in RKHS by mapping $\widetilde{\Phi}(\mathbf{x}_n) \mapsto P\widetilde{\Phi}(\mathbf{x}_n)$, and train a linear classifier to recover the gender labels from the neutralized representations. We get a classification accuracy of $50.59 \pm 0.04\%$, very close to the majority accuracy of $50.58\%$. This suggests that the process is effective in protecting against the kernel that was applied in training (training a linear classifier on the kernel-transformed representations is equivalent to training a kernel classifier on the original representations). Notice, however, that we cannot test other non-linear kernel classifiers on these representations in a similar way: if the approximate kernel mapping $\widetilde{\Phi}(\cdot)$ corresponds, for example, to a polynomial kernel, we cannot measure the success of an RBF kernel in recovering the bias information after the intervention without performing the pre-image mapping.

Pre-image mapping. Our neutralization algorithm relies on calculating the pre-image of the kernel features after the intervention.
To evaluate the quality of the pre-image mapping, we measure the relative reconstruction error $\frac{\left\| P\widetilde{\Phi}(\mathbf{x}_n) - \widetilde{\Phi}(f(\mathbf{x}_n)) \right\|_2^2}{\left\| P\widetilde{\Phi}(\mathbf{x}_n) \right\|_2^2}$ over all points $\mathbf{x}_n$ in the evaluation set. Averaged over all 4 seeds and the different kernels we experimented with, we obtain a reconstruction error of $1.81 \pm 1.61\%$ (range 0.45–7.94%).

# A.4 Gender Prediction from the Pre-image

In Table 4 we report the full evaluation results on the pre-image neutralized representations, where we have used the same kernel (and the same hyperparameters) for neutralization and gender recovery from the pre-image.
| Kernel | γ | α | d | WEAT's d | WEAT's p-value | Gender Acc. |
| --- | --- | --- | --- | --- | --- | --- |
| Poly | 0.05 | 0.8 | 2 | 0.73 ± 0.01 | 0.084 ± 0.002 | 0.49 ± 0.00 |
| Poly | 0.05 | 1 | 2 | 0.74 ± 0.01 | 0.080 ± 0.002 | 0.49 ± 0.00 |
| Poly | 0.05 | 1.2 | 2 | 0.74 ± 0.01 | 0.081 ± 0.002 | 0.49 ± 0.00 |
| Poly | 0.05 | 0.8 | 3 | 0.75 ± 0.01 | 0.077 ± 0.003 | 0.49 ± 0.00 |
| Poly | 0.05 | 1 | 3 | 0.74 ± 0.01 | 0.080 ± 0.003 | 0.49 ± 0.00 |
| Poly | 0.05 | 1.2 | 3 | 0.74 ± 0.01 | 0.081 ± 0.003 | 0.49 ± 0.00 |
| Poly | 0.1 | 0.8 | 2 | 0.74 ± 0.01 | 0.080 ± 0.003 | 0.49 ± 0.00 |
| Poly | 0.1 | 1 | 2 | 0.74 ± 0.00 | 0.080 ± 0.001 | 0.49 ± 0.00 |
| Poly | 0.1 | 1.2 | 2 | 0.74 ± 0.01 | 0.081 ± 0.002 | 0.49 ± 0.00 |
| Poly | 0.1 | 0.8 | 3 | 0.74 ± 0.00 | 0.081 ± 0.002 | 0.53 ± 0.01 |
| Poly | 0.1 | 1 | 3 | 0.73 ± 0.01 | 0.082 ± 0.004 | 0.65 ± 0.03 |
| Poly | 0.1 | 1.2 | 3 | 0.73 ± 0.01 | 0.084 ± 0.004 | 0.72 ± 0.01 |
| Poly | 0.15 | 0.8 | 2 | 0.73 ± 0.01 | 0.082 ± 0.003 | 0.56 ± 0.03 |
| Poly | 0.15 | 1 | 2 | 0.74 ± 0.02 | 0.081 ± 0.006 | 0.54 ± 0.02 |
| Poly | 0.15 | 1.2 | 2 | 0.73 ± 0.01 | 0.084 ± 0.004 | 0.55 ± 0.02 |
| Poly | 0.15 | 0.8 | 3 | 0.73 ± 0.01 | 0.082 ± 0.003 | 0.88 ± 0.02 |
| Poly | 0.15 | 1 | 3 | 0.73 ± 0.01 | 0.082 ± 0.003 | 0.92 ± 0.00 |
| Poly | 0.15 | 1.2 | 3 | 0.74 ± 0.01 | 0.079 ± 0.003 | 0.93 ± 0.00 |
| RBF | 0.1 | - | - | 0.75 ± 0.01 | 0.078 ± 0.003 | 0.49 ± 0.01 |
| RBF | 0.15 | - | - | 0.74 ± 0.01 | 0.079 ± 0.003 | 0.68 ± 0.03 |
| RBF | 0.2 | - | - | 0.74 ± 0.01 | 0.081 ± 0.003 | 0.89 ± 0.01 |
| Laplace | 0.1 | - | - | 0.72 ± 0.03 | 0.086 ± 0.008 | 0.62 ± 0.04 |
| Laplace | 0.15 | - | - | 0.74 ± 0.05 | 0.080 ± 0.015 | 0.77 ± 0.05 |
| Laplace | 0.2 | - | - | 0.67 ± 0.05 | 0.107 ± 0.020 | 0.88 ± 0.04 |
| Linear | - | - | - | 0.74 ± 0.01 | 0.079 ± 0.004 | 0.54 ± 0.02 |
| Sigmoid | 0.005 | 0 | - | 0.78 ± 0.05 | 0.069 ± 0.014 | 0.49 ± 0.00 |
| Sigmoid | 0.005 | 0.01 | - | 0.73 ± 0.03 | 0.082 ± 0.010 | 0.49 ± 0.00 |
| Sigmoid | 0.003 | 0 | - | 0.73 ± 0.05 | 0.083 ± 0.017 | 0.49 ± 0.00 |
| Sigmoid | 0.003 | 0.01 | - | 0.76 ± 0.03 | 0.074 ± 0.008 | 0.49 ± 0.00 |
| EasyMKL | - | - | - | 0.73 ± 0.01 | 0.084 ± 0.005 | 0.69 ± 0.01 |
| UniformMKL | - | - | - | 0.73 ± 0.01 | 0.084 ± 0.002 | 0.49 ± 0.00 |
| Original | - | - | - | 1.56 | 0.000 | ≥ 0.99 |
Table 4: Evaluation of the neutralized pre-image representations. We show the WEAT test's statistics and $p$-value, as well as the gender prediction accuracy of a kernel classifier of the same type as the one applied during neutralization.

# A.5 Closest Neighbors

In Table 5, we show the closest neighbors to randomly-sampled word representations before and after gender erasure under the polynomial kernel. The results for other kernels are qualitatively similar.
| Word | Neighbors before | Neighbors after |
| --- | --- | --- |
| spiritual | faith, religious, healing | emotional, religious, healing |
| lesson | learn, teach, lessons | teaching, teach, lessons |
| faces | faced, facing, face | faced, facing, face |
| forget | know, let, remember | know, let, remember |
| converter | ipod, conversion, convert | ipod, conversion, convert |
| clean | keep, wash, cleaning | keep, wash, cleaning |
| formal | elegant, dress, appropriate | elegant, appropriate, dress |
| identity | identify, context, identification | context, identify, identification |
| other | these, those, many | these, those, many |
| licensed | registered, certified, license | registered, certified, license |
| ratings | reviews, rated, rating | reviews, rated, rating |
| properly | proper, effectively, correctly | effectively, proper, correctly |
| build | create, built, building | built, create, building |
| solutions | systems, technologies, solution | services, technologies, solution |
| afghanistan | troops, pakistan, iraq | troops, pakistan, iraq |
| wallpaper | desktop, pictures, picture | desktop, pictures, picture |
| sound | audio, noise, sounds | audio, noise, sounds |
| gender | sexual, male, age | male, differences, age |
| boat | cruise, ship, fishing | cruise, ship, fishing |
| downtown | portland, city, neighborhood | portland, neighborhood, city |
| lawyers | attorney, lawyer, attorneys | attorney, lawyer, attorneys |
| smart | how, easy, intelligent | wise, easy, intelligent |
| spending | budget, spent, spend | budget, spent, spend |
| contest | winners, winner, competition | winners, winner, competition |
| want | n't, know, need | n't, know, need |
| advice | guidance, suggestions, tips | guidance, suggestions, tips |
| professionals | managers, professional, experts | managers, professional, experts |
| g | d, b, f | d, b, f |
| australian | zealand, british, australia | zealand, british, australia |
| na | mo, o, da | mo, o, da |
Table 5: Closest neighbors to randomly-sampled words from the GloVe vocabulary, for the original representations and for the pre-images after our intervention.

# A.6 WEAT Results

Here we report the results of the WEAT test for the career- and family-related words (Table 6) and the art- and mathematics-related words (Table 7).
| Kernel | γ | α | d | WEAT-d | p-value |
| --- | --- | --- | --- | --- | --- |
| Poly | 0.05 | 0.8 | 2 | 0.72 ± 0.00 | 0.093 ± 0.002 |
| Poly | 0.05 | 1 | 2 | 0.72 ± 0.01 | 0.091 ± 0.003 |
| Poly | 0.05 | 1.2 | 2 | 0.72 ± 0.01 | 0.093 ± 0.004 |
| Poly | 0.05 | 0.8 | 3 | 0.73 ± 0.00 | 0.089 ± 0.001 |
| Poly | 0.05 | 1 | 3 | 0.74 ± 0.01 | 0.086 ± 0.004 |
| Poly | 0.05 | 1.2 | 3 | 0.73 ± 0.01 | 0.089 ± 0.002 |
| Poly | 0.1 | 0.8 | 2 | 0.72 ± 0.01 | 0.091 ± 0.005 |
| Poly | 0.1 | 1 | 2 | 0.73 ± 0.00 | 0.089 ± 0.001 |
| Poly | 0.1 | 1.2 | 2 | 0.72 ± 0.01 | 0.090 ± 0.003 |
| Poly | 0.1 | 0.8 | 3 | 0.72 ± 0.01 | 0.090 ± 0.003 |
| Poly | 0.1 | 1 | 3 | 0.72 ± 0.01 | 0.090 ± 0.003 |
| Poly | 0.1 | 1.2 | 3 | 0.72 ± 0.01 | 0.091 ± 0.002 |
| Poly | 0.15 | 0.8 | 2 | 0.71 ± 0.01 | 0.095 ± 0.004 |
| Poly | 0.15 | 1 | 2 | 0.74 ± 0.00 | 0.087 ± 0.001 |
| Poly | 0.15 | 1.2 | 2 | 0.72 ± 0.01 | 0.091 ± 0.002 |
| Poly | 0.15 | 0.8 | 3 | 0.72 ± 0.00 | 0.093 ± 0.001 |
| Poly | 0.15 | 1 | 3 | 0.72 ± 0.01 | 0.092 ± 0.002 |
| Poly | 0.15 | 1.2 | 3 | 0.73 ± 0.01 | 0.090 ± 0.005 |
| RBF | 0.1 | - | - | 0.72 ± 0.01 | 0.090 ± 0.003 |
| RBF | 0.15 | - | - | 0.73 ± 0.01 | 0.090 ± 0.005 |
| RBF | 0.2 | - | - | 0.72 ± 0.01 | 0.091 ± 0.003 |
| Laplace | 0.1 | - | - | 0.75 ± 0.05 | 0.083 ± 0.017 |
| Laplace | 0.15 | - | - | 0.77 ± 0.02 | 0.076 ± 0.007 |
| Laplace | 0.2 | - | - | 0.70 ± 0.02 | 0.098 ± 0.008 |
| Linear | - | - | - | 0.72 ± 0.02 | 0.090 ± 0.006 |
| Sigmoid | 0.005 | 0 | - | 0.73 ± 0.03 | 0.087 ± 0.010 |
| Sigmoid | 0.005 | 0.01 | - | 0.73 ± 0.02 | 0.087 ± 0.006 |
| Sigmoid | 0.003 | 0 | - | 0.76 ± 0.06 | 0.079 ± 0.019 |
| Sigmoid | 0.003 | 0.01 | - | 0.76 ± 0.07 | 0.080 ± 0.020 |
| EasyMKL | - | - | - | 0.72 ± 0.02 | 0.091 ± 0.005 |
| UniformMKL | - | - | - | 0.72 ± 0.00 | 0.092 ± 0.002 |
| Original | - | - | - | 1.69 | 0.000 |
Table 6: Word association bias test (WEAT) for career- and family-related terms.
| Kernel | γ | α | d | WEAT-d | p-value |
| --- | --- | --- | --- | --- | --- |
| Poly | 0.05 | 0.8 | 2 | 0.78 ± 0.00 | 0.068 ± 0.001 |
| Poly | 0.05 | 1 | 2 | 0.78 ± 0.01 | 0.067 ± 0.002 |
| Poly | 0.05 | 1.2 | 2 | 0.78 ± 0.00 | 0.067 ± 0.001 |
| Poly | 0.05 | 0.8 | 3 | 0.78 ± 0.01 | 0.066 ± 0.002 |
| Poly | 0.05 | 1 | 3 | 0.78 ± 0.01 | 0.066 ± 0.002 |
| Poly | 0.05 | 1.2 | 3 | 0.78 ± 0.00 | 0.066 ± 0.001 |
| Poly | 0.1 | 0.8 | 2 | 0.78 ± 0.00 | 0.068 ± 0.001 |
| Poly | 0.1 | 1 | 2 | 0.78 ± 0.01 | 0.066 ± 0.002 |
| Poly | 0.1 | 1.2 | 2 | 0.77 ± 0.01 | 0.069 ± 0.004 |
| Poly | 0.1 | 0.8 | 3 | 0.78 ± 0.00 | 0.067 ± 0.001 |
| Poly | 0.1 | 1 | 3 | 0.77 ± 0.01 | 0.070 ± 0.001 |
| Poly | 0.1 | 1.2 | 3 | 0.78 ± 0.01 | 0.068 ± 0.002 |
| Poly | 0.15 | 0.8 | 2 | 0.78 ± 0.01 | 0.068 ± 0.003 |
| Poly | 0.15 | 1 | 2 | 0.78 ± 0.01 | 0.068 ± 0.003 |
| Poly | 0.15 | 1.2 | 2 | 0.78 ± 0.00 | 0.068 ± 0.001 |
| Poly | 0.15 | 0.8 | 3 | 0.78 ± 0.01 | 0.067 ± 0.002 |
| Poly | 0.15 | 1 | 3 | 0.78 ± 0.01 | 0.067 ± 0.002 |
| Poly | 0.15 | 1.2 | 3 | 0.77 ± 0.01 | 0.069 ± 0.002 |
| RBF | 0.1 | - | - | 0.79 ± 0.01 | 0.066 ± 0.003 |
| RBF | 0.15 | - | - | 0.78 ± 0.01 | 0.067 ± 0.002 |
| RBF | 0.2 | - | - | 0.78 ± 0.01 | 0.067 ± 0.002 |
| Laplace | 0.1 | - | - | 0.80 ± 0.04 | 0.064 ± 0.012 |
| Laplace | 0.15 | - | - | 0.81 ± 0.03 | 0.061 ± 0.009 |
| Laplace | 0.2 | - | - | 0.77 ± 0.04 | 0.070 ± 0.013 |
| Linear | - | - | - | 0.79 ± 0.01 | 0.066 ± 0.002 |
| Sigmoid | 0.005 | 0 | - | 0.82 ± 0.04 | 0.057 ± 0.010 |
| Sigmoid | 0.005 | 0.01 | - | 0.77 ± 0.05 | 0.070 ± 0.012 |
| Sigmoid | 0.003 | 0 | - | 0.76 ± 0.03 | 0.073 ± 0.009 |
| Sigmoid | 0.003 | 0.01 | - | 0.79 ± 0.03 | 0.066 ± 0.009 |
| EasyMKL | - | - | - | 0.78 ± 0.00 | 0.066 ± 0.001 |
| UniformMKL | - | - | - | 0.78 ± 0.01 | 0.069 ± 0.002 |
| Original | - | - | - | 1.56 | 0.000 |
Table 7: Word association bias test (WEAT) for art- and mathematics-related terms.

# A.7 Transfer Results
| Neutralizing kernel | UniformMKL | EasyMKL | RBF | Poly | Laplace | Sigmoid | Linear | MLP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| poly, γ=0.05, d=2, α=0.8 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.94 ± 0.01 | 0.49 ± 0.00 | 0.61 ± 0.06 | 0.97 ± 0.00 |
| poly, γ=0.05, d=2, α=1 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.01 | 0.98 ± 0.00 | 0.94 ± 0.00 | 0.49 ± 0.00 | 0.52 ± 0.01 | 0.97 ± 0.00 |
| poly, γ=0.05, d=2, α=1.2 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.94 ± 0.00 | 0.49 ± 0.00 | 0.53 ± 0.02 | 0.97 ± 0.00 |
| poly, γ=0.05, d=3, α=0.8 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.01 | 0.98 ± 0.00 | 0.93 ± 0.01 | 0.49 ± 0.00 | 0.54 ± 0.03 | 0.97 ± 0.00 |
| poly, γ=0.05, d=3, α=1 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.94 ± 0.01 | 0.49 ± 0.00 | 0.59 ± 0.05 | 0.97 ± 0.00 |
| poly, γ=0.05, d=3, α=1.2 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.94 ± 0.00 | 0.49 ± 0.00 | 0.53 ± 0.01 | 0.97 ± 0.00 |
| poly, γ=0.1, d=2, α=0.8 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.93 ± 0.01 | 0.49 ± 0.00 | 0.54 ± 0.04 | 0.97 ± 0.00 |
| poly, γ=0.1, d=2, α=1 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.93 ± 0.01 | 0.49 ± 0.00 | 0.53 ± 0.02 | 0.97 ± 0.00 |
| poly, γ=0.1, d=2, α=1.2 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.94 ± 0.00 | 0.49 ± 0.00 | 0.54 ± 0.01 | 0.97 ± 0.00 |
| poly, γ=0.1, d=3, α=0.8 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.94 ± 0.00 | 0.49 ± 0.00 | 0.54 ± 0.00 | 0.97 ± 0.00 |
| poly, γ=0.1, d=3, α=1 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.93 ± 0.00 | 0.49 ± 0.00 | 0.56 ± 0.02 | 0.97 ± 0.00 |
| poly, γ=0.15, d=2, α=1.2 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.93 ± 0.00 | 0.49 ± 0.00 | 0.54 ± 0.01 | 0.97 ± 0.00 |
| poly, γ=0.15, d=3, α=0.8 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.93 ± 0.01 | 0.49 ± 0.00 | 0.58 ± 0.03 | 0.97 ± 0.00 |
| poly, γ=0.15, d=3, α=1 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.93 ± 0.00 | 0.49 ± 0.00 | 0.57 ± 0.01 | 0.97 ± 0.00 |
| poly, γ=0.15, d=3, α=1.2 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.93 ± 0.00 | 0.49 ± 0.00 | 0.56 ± 0.01 | 0.97 ± 0.00 |
| rbf, γ=0.1 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.94 ± 0.00 | 0.49 ± 0.00 | 0.58 ± 0.04 | 0.97 ± 0.00 |
| rbf, γ=0.15 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.94 ± 0.00 | 0.49 ± 0.00 | 0.60 ± 0.03 | 0.97 ± 0.00 |
| rbf, γ=0.2 | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.93 ± 0.01 | 0.49 ± 0.00 | 0.60 ± 0.04 | 0.97 ± 0.00 |
| laplace, γ=0.1 | 0.60 ± 0.06 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.93 ± 0.01 | 0.49 ± 0.00 | 0.61 ± 0.03 | 0.97 ± 0.00 |
| laplace, γ=0.15 | 0.56 ± 0.04 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.94 ± 0.00 | 0.49 ± 0.00 | 0.61 ± 0.01 | 0.97 ± 0.00 |
| laplace, γ=0.2 | 0.59 ± 0.06 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.94 ± 0.00 | 0.49 ± 0.00 | 0.62 ± 0.02 | 0.97 ± 0.00 |
| linear | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.93 ± 0.00 | 0.49 ± 0.00 | 0.54 ± 0.02 | 0.97 ± 0.00 |
| sigmoid, γ=0.005, α=0 | 0.64 ± 0.05 | 0.97 ± 0.00 | 0.93 ± 0.01 | 0.98 ± 0.00 | 0.90 ± 0.01 | 0.49 ± 0.00 | 0.65 ± 0.05 | 0.97 ± 0.00 |
| sigmoid, γ=0.005, α=0.01 | 0.63 ± 0.02 | 0.97 ± 0.00 | 0.94 ± 0.00 | 0.98 ± 0.00 | 0.90 ± 0.01 | 0.49 ± 0.00 | 0.65 ± 0.04 | 0.97 ± 0.00 |
| sigmoid, γ=0.003, α=0 | 0.63 ± 0.07 | 0.97 ± 0.00 | 0.92 ± 0.01 | 0.98 ± 0.00 | 0.89 ± 0.01 | 0.49 ± 0.00 | 0.66 ± 0.07 | 0.97 ± 0.00 |
| sigmoid, γ=0.003, α=0.01 | 0.65 ± 0.03 | 0.97 ± 0.00 | 0.92 ± 0.01 | 0.98 ± 0.00 | 0.89 ± 0.01 | 0.49 ± 0.00 | 0.65 ± 0.05 | 0.97 ± 0.00 |
| EasyMKL | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.94 ± 0.00 | 0.49 ± 0.00 | 0.57 ± 0.03 | 0.97 ± 0.00 |
| UniformMKL | 0.49 ± 0.00 | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.98 ± 0.00 | 0.94 ± 0.01 | 0.49 ± 0.00 | 0.58 ± 0.08 | 0.97 ± 0.00 |
Table 8: Gender prediction from the neutralized pre-image representations using non-linear adversaries that differ from the neutralizing kernel.

In this appendix, we provide gender prediction accuracy on the neutralized pre-image representations with predictors that differ from those used in training (Experiment §5.2).

Setup. After projecting out the gender concept in kernel space and computing the pre-image of the neutralized representations, we apply different non-linear kernels, as well as an MLP, to predict gender. We use the following parameters:

- RBF: $\gamma = 0.3$.
- Poly: $d = 3, \gamma = 0.5, \alpha = 0.3$.
- Laplace: $\gamma = 0.3$.
- Sigmoid: $\alpha = 0, \gamma = 0.01$.
- MLP: a network with a single 128-dimensional hidden layer with ReLU activations.

All classifiers were trained using sklearn.

Results. The results are shown in Table 8. Rows denote the kernel that was applied for neutralization in Eq. (10), while columns denote the type of adversarial classifier applied on the final pre-image representations. Numbers denote accuracy in gender prediction.
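A Table-8-style evaluation can be sketched with sklearn as below. This is a hedged illustration, not the released code: the data is a synthetic stand-in for the reconstructed pre-images, the paper's $\alpha$ is assumed to correspond to sklearn's `coef0` for the polynomial/sigmoid kernels, and the Laplace adversary is omitted because it would require a precomputed Gram matrix (e.g., via `sklearn.metrics.pairwise.laplacian_kernel`).

```python
# Sketch: train each cross-kernel adversary on (stand-in) pre-image
# representations and report its gender-prediction accuracy.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 300))        # stand-in for pre-image GloVe vectors
X_test = rng.normal(size=(100, 300))
y_train = (X_train[:, 0] > 0).astype(int)    # stand-in binary gender labels
y_test = (X_test[:, 0] > 0).astype(int)

# Hyperparameters as listed above; alpha is assumed to map to sklearn's coef0.
adversaries = {
    "rbf": SVC(kernel="rbf", gamma=0.3),
    "poly": SVC(kernel="poly", degree=3, gamma=0.5, coef0=0.3),
    "sigmoid": SVC(kernel="sigmoid", gamma=0.01, coef0=0.0),
    "mlp": MLPClassifier(hidden_layer_sizes=(128,), activation="relu",
                         max_iter=500, random_state=0),
}
results = {name: clf.fit(X_train, y_train).score(X_test, y_test)
           for name, clf in adversaries.items()}
print(results)
```

In the full pipeline, `X_train`/`X_test` would be the pre-images of the neutralized kernel features rather than random vectors, so an accuracy near 0.5 would indicate successful erasure against that adversary.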
# AEG: Argumentative Essay Generation via A Dual-Decoder Model with Content Planning

Jianzhu Bao $^{1,4*}$ , Yasheng Wang $^{2}$ , Yitong Li $^{2,3}$ , Fei Mi $^{2}$ , Ruifeng Xu $^{1,4,5\dagger}$

$^{1}$ Harbin Institute of Technology, Shenzhen, China

$^{2}$ Huawei Noah's Ark Lab

$^{3}$ Huawei Technologies Co., Ltd.

$^{4}$ Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies

$^{5}$ Peng Cheng Laboratory, Shenzhen, China

jianzhubao@gmail.com, xuruifeng@hit.edu.cn

{wangyasheng, feimi2, liyitong3}@huawei.com

# Abstract

Argument generation is an important but challenging task in computational argumentation.
Existing studies have mainly focused on generating individual short arguments, while research on generating long and coherent argumentative essays is still under-explored. In this paper, we propose a new task, Argumentative Essay Generation (AEG). Given a writing prompt, the goal of AEG is to automatically generate an argumentative essay with strong persuasiveness. We construct a large-scale dataset, ArgEssay, for this new task and establish a strong model based on a dual-decoder Transformer architecture. Our proposed model contains two decoders, a planning decoder (PD) and a writing decoder (WD), where PD is used to generate a sequence for essay content planning and WD incorporates the planning information to write an essay. Further, we pre-train this model on a large news dataset to enhance the plan-and-write paradigm. Automatic and human evaluation results show that our model can generate more coherent and persuasive essays with higher diversity and less repetition compared to several baselines.1

# 1 Introduction

Automatic argument generation, the task of generating persuasive arguments on controversial issues (Toulmin, 2003; Zukerman et al., 2000), has received much research interest in recent years (Khatib et al., 2021; Schiller et al., 2021). Many works have studied the generation of different kinds of arguments, such as counter-argument generation (Hua and Wang, 2018; Hua et al., 2019; Hidey and McKeown, 2019; Alshomary et al., 2021b) and the

# Writing Prompt:

Online education is becoming more and more popular. Some people claim that e-learning has so many benefits that it will replace face-to-face education soon. Others say that traditional education is irreplaceable. Discuss both views and give your opinion.

# Argumentative Essay:

Acquiring knowledge virtually has become extremely popular in the present times.
While many individuals believe that there are various advantages and might overtake traditional learning in the future, a sizeable group thinks that the traditional method cannot be replaced. I believe that the use of the classroom might reduce, but it cannot be replaced. This essay will discuss both views and substantiate my view in the course of the essay. + +To commence with, virtual learning is widely implemented because it is convenient and cost-effective. It provides us with the opportunity to obtain an education without the hassle of travelling. Students, for instance, can attend classes at the comfort of their home, resulting in saving time that might have been spent on commuting in the past. Similarly, learning online can also be considered cost-efficient. Instead of spending an immense amount of college funds, we can attain the same level of qualifications at a cheaper price as it does not involve infrastructure. + +On the contrary, traditional learning offers guided learning and hands-on experience. Classroom teaching practices assist students in obtaining better study skills, such as organising and gathering reliable information due to constant interaction with the teacher, resulting in improved academic achievement. In addition, it helps in gaining practical knowledge through sessions in laboratories that are not a part of digital practices. For example, pupils are provided with constant guidance during face to face teaching along with acquiring real-time experience. + +In conclusion, although online classes might seem beneficial in terms of convenience together with being budget-friendly, classroom education provides a better learning and practical experience. Therefore, I think that face-to-face classes are not replaceable. + +Table 1: An example of our proposed Argumentative Essay Generation task. 
Given a writing prompt about a controversial topic, the task is to generate a well-organized argumentative essay with good coherence and strong persuasiveness. The major claims express the topic, stance, and main idea of this essay.

controlled argument generation under certain topics or aspects (Gretz et al., 2020; Schiller et al., 2021; Alshomary et al., 2021a; Khatib et al., 2021). However, real-life scenarios such as news editorials, competitive debating, and even television shows require more powerful ways of systematically organizing arguments when composing long-form essays or speeches that can fully express opinions and persuade audiences. Previous studies predominantly focused on generating individual and relatively short arguments, which can be inadequate when addressing these long-form argument generation tasks.

In this paper, we address the question of how to generate and compose a comprehensive and coherent argumentative essay that can contain multiple arguments covering different aspects. This is a challenging but fundamental task: fully addressing it requires much more of the understanding capabilities associated with human intelligence, on the way towards general artificial intelligence (Slonim et al., 2021). However, with the rapid development of pre-training methods (Devlin et al., 2019; Brown et al., 2020; Bommasani et al., 2021), generating coherent long-form documents of reasonable quality has become attainable (Guan et al., 2021; Yu et al., 2021). Therefore, to facilitate this line of research, we introduce a new document-level generation task, Argumentative Essay Generation (AEG), which focuses on generating long-form argumentative essays with strong persuasiveness given a writing prompt. An example of AEG is shown in Table 1. In this example, the given writing prompt specifies a topic about "online education".
The expected argumentative essay first introduces the topic and states the stance (paragraph 1), then justifies its point through a series of arguments (paragraphs 2-3), and finally summarizes and echoes the main idea (paragraph 4). We can see that AEG requires generating relevant claims and evidence covering diverse aspects of a given topic, and then incorporating them appropriately and logically to compose an argumentative essay.

In order to make progress towards AEG, we construct a large-scale dataset, ArgEssay, containing 11k high-quality argumentative essays along with their corresponding writing prompts on a number of common controversial topics such as technological progress, educational methodology, and environmental issues. Our proposed dataset is built upon the writing tasks of several international standardized tests of English, such as IELTS and TOEFL, which have also been studied in other tasks such as automated essay scoring (Blanchard et al., 2013) and argument mining (Stab and Gurevych, 2017). Compared to previous argument generation datasets collected from social media, the essays in our dataset are more formal in wording and writing and therefore of higher quality, making our dataset a better choice for studying argument generation.

To tackle the proposed AEG task, we adopt the plan-and-write paradigm for generating diverse and content-rich argumentative essays, as content planning has proved beneficial for long-form text generation (Fan et al., 2019; Hua and Wang, 2019). We establish an encoder-decoder Transformer model with a dual decoder, which contains a planning decoder (PD) for generating keywords or relational triplets as the essay content plan and a writing decoder (WD) for composing an essay guided by this plan. Adopting this dual-decoder architecture keeps the planning and writing processes separate to avoid mutual interference.
Automatic evaluation results show that our model outperforms several strong baselines in terms of diversity and repetition. Human evaluation results further demonstrate that the essays generated by our model maintain good coherence and strong persuasiveness. We also show that our model yields better plannings compared to the baselines, and that the content of the generated essays can be effectively controlled by these plannings. In addition, the performance of our model can be further improved after being pre-trained on a large news dataset.

We summarize our contributions as follows:

- We propose a new task of argumentative essay generation and create a large-scale and high-quality benchmark for this task.
- We establish a Transformer-based model with a dual decoder which generates argumentative essays in a plan-and-write manner, and further improve the model performance via pre-training.
- Using both automatic and human evaluations, we demonstrate that our proposed model can generate more coherent and persuasive argumentative essays with higher diversity and a lower repetition rate compared to several baselines.

# 2 Related Work

# 2.1 Argumentative Essay Analysis

The analysis of argumentative essays has been extensively studied since an early stage (Madnani et al., 2012; Beigman Klebanov and Flor, 2013). To comprehensively study the structure of argumentation in argumentative essays, Stab and Gurevych (2014, 2017) presented the Persuasive Essay dataset with annotations of both argument components and argumentative relations. Based on this dataset, much subsequent research has been conducted to better parse the argumentation

structure in argumentative essays (Persing and Ng, 2016; Eger et al., 2017; Potash et al., 2017; Kuribayashi et al., 2019; Bao et al., 2021).
These studies above are closely related to our work, since the analysis of the structure and quality of argumentative essays can support AEG by providing structured argument knowledge.

# 2.2 Argument Generation

Early work on argument generation involved many hand-crafted components, such as constructing argument knowledge bases (Reed, 1999; Zukerman et al., 2000) or designing argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000).

To reframe existing argumentative text into new arguments, some works employ argument retrieval (Levy et al., 2018; Stab et al., 2018) to generate arguments (Sato et al., 2015; Hua and Wang, 2018; Wachsmuth et al., 2018), while others synthesize arguments by reframing existing claims or evidence (Yanase et al., 2015; Bilu and Slonim, 2016; Baff et al., 2019).

Recently, more attention has been paid to end-to-end generation of arguments using neural models (Hua and Wang, 2018; Hidey and McKeown, 2019). Hua et al. (2019) presented a sequence-to-sequence framework enhanced by external knowledge for generating counter-arguments. Gretz et al. (2020) explored the use of a pipeline based on the pre-trained language model GPT-2 (Radford et al., 2019) to generate coherent claims. Schiller et al. (2021) developed a controllable argument generation model, which can control the topic, stance, and aspect of a generated argument. Alshomary et al. (2021a) proposed the belief-based claim generation task and leveraged conditional language models to generate arguments controlled by the prior beliefs of the audience. Khatib et al. (2021) proposed to control the generation of arguments with argumentation knowledge graphs.

However, current argument generation research is limited to generating individual and relatively short arguments, without considering the generation of long and coherent argumentative essays containing multiple aspects of arguments.
# 2.3 Long-Form Text Generation

Our work is also closely related to long-form text generation research, such as story generation (Fan et al., 2018; Yao et al., 2019; Guan et al., 2020; Xu et al., 2020), data-to-text generation (Puduppully et al., 2019; Hua et al., 2021; Hua and Wang, 2020; Dong et al., 2021), paragraph generation (Hua and Wang, 2019; Yu et al., 2021), and essay generation (Feng et al., 2018; Yang et al., 2019; Qiao et al., 2020; Liu et al., 2021).

Most of these studies focus on generating narrative or descriptive texts, while we concentrate on generating argumentative essays, with more emphasis on argumentativeness.

# 3 Dataset Creation

Our dataset is collected from Essay Forum, $^{2}$ an online community established by professional writers and editors to help users write, edit, and revise their essays. Specifically, we selected high-quality essays and prompts from the writing feedback section of Essay Forum, where users post their essays for revision suggestions in preparation for standardized English tests such as IELTS or TOEFL. $^{3}$ In addition, the essays in the writing feedback section have also been used in research on argument mining (Stab and Gurevych, 2014, 2017).

First, we collect all the posts in the writing feedback section of Essay Forum. Then, to obtain the prompt-essay pairs and ensure the text quality, we conduct several pre-processing steps, including:

- Separating the essay and the prompt in each post. For posts where the author does not mark the prompt in bold or italics, we filter them out and then process them manually;
- Filtering out prompt-essay pairs with non-argumentative essays (such as narrative essays, character description essays, and graphical analysis essays)
using manually summarized rules (see Appendix B.1 for details);
- Cleaning irrelevant text such as special characters, user names, and expressions of thanks or greetings through rule-based deletion and manual processing (see Appendix B.2 for details);
- Only keeping prompt-essay pairs whose essay contains fewer than 500 tokens (tokenized by the Stanford CoreNLP toolkit (Manning et al., 2014)) and 4 or 5 paragraphs. The reason for this procedure is that, in the writing feedback section of Essay Forum, essays that do not satisfy these attributes are likely
| Dataset | Avg. Tokens | Avg. Sents |
| --- | --- | --- |
| (Hua and Wang, 2018) | 161.10 | 7.70 |
| (Hua et al., 2019) | 66.00 | 2.95 |
| (Khatib et al., 2021) | 81.89 | 3.85 |
| ArgEssay (Ours) | 327.35 | 14.41 |
Table 2: Comparison of our dataset with existing argument generation datasets. (Avg. Tokens)/(Avg. Sents) indicates the average number of tokens/sentences in the target generation text.

not in an argumentative writing style (Stab and Gurevych, 2014);

- Finally, manually reviewing each remaining prompt-essay pair to filter out obviously flawed essays and ensure all the essays are argumentative.

It is worth noting that the Essay Forum administrators review and remove any posts that are considered to be libelous, racist, or otherwise inappropriate, so the ethical soundness of our dataset can be assured. Further, we also manually check the dataset to avoid ethical issues.

As for the data split, we want to minimize the overlap between the train set and the validation/test set in terms of prompts; otherwise, it would be difficult to test the model's generalization ability on new prompts. Thus, we first extract keywords from the prompts based on TF-IDF (Salton and McGill, 1984) and measure the similarity of any two prompts as the Jaccard similarity between their keyword sets. Then, when splitting the data, for any prompt in the validation/test set, we ensure that the similarity between it and each prompt in the train set does not exceed a threshold $\epsilon$ . After several rounds of manual verification, we set $\epsilon = 0.65$ , as we observe that this threshold reasonably separates the prompts, with more than $70\%$ of the validation/test prompts having a similarity of less than 0.30 to any training prompt.

The final dataset consists of 11,282 prompt-essay pairs in English, of which 9,277/1,002/1,003 pairs are used for training/validation/testing, respectively. We compare our proposed dataset with existing argument generation datasets in Table 2. Our ArgEssay contains longer target texts with richer content, which makes it more challenging.
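As an illustration, the TF-IDF keyword extraction and Jaccard-based split filtering described above can be sketched as follows (a minimal sketch with hypothetical helper names; the tokenization and keyword selection are simplified stand-ins for the actual pipeline):

```python
from collections import Counter
import math

def tfidf_keywords(prompt_tokens, doc_freq, n_docs, top_k=10):
    """Select the top-k TF-IDF keywords of a tokenized prompt."""
    tf = Counter(prompt_tokens)
    scores = {w: tf[w] * math.log(n_docs / (1 + doc_freq.get(w, 0)))
              for w in tf}
    return set(sorted(scores, key=scores.get, reverse=True)[:top_k])

def jaccard(a, b):
    """Jaccard similarity between two keyword sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def admissible(candidate_kw, train_kw_sets, eps=0.65):
    """A validation/test prompt is kept only if its keyword set is at most
    eps-similar to the keyword set of every training prompt."""
    return all(jaccard(candidate_kw, kw) <= eps for kw in train_kw_sets)
```

In this sketch, `doc_freq` maps each word to the number of prompts containing it, and `eps` corresponds to the threshold $\epsilon = 0.65$ used in the paper.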
Also, most existing datasets are constructed from social media, while the essays in our dataset are written for standardized English tests and are therefore more formal in terms of wording and structure.

# 4 Methods

Our proposed AEG task can be formulated as follows: given a writing prompt $X = [x_{1}, x_{2}, \ldots, x_{m}]$ , a relevant argumentative essay $Y^{e} = [y_{1}, y_{2}, \ldots, y_{n}]$ should be generated.

In order to generate diverse and content-rich essays, we propose a Transformer-based dual-decoder model with a plan-and-write strategy. In detail, our model first predicts a planning sequence $Y^{p}$ ; it then generates the argumentative essay $Y^{e}$ under the guidance of the planning sequence through the planning attention. The planning strategy is commonly used in long-form text generation studies. Here, instead of using a standalone model for predicting the planning (Fan et al., 2019; Xu et al., 2020), we utilize a dual-decoder architecture to enable end-to-end training for generating both the planning and the essay.

In the following, we first introduce the method of constructing the planning sequence $Y^{p}$ for training and then describe our model in detail.

# 4.1 Construction of Planning

For flexibility, we do not strictly restrict the form of the planning, as long as it is natural language text. In this paper, we investigate two kinds of planning constructed using automatic methods: a keyword-based planning and a relation-based planning.

- 1) For the keyword-based (KW) planning, we use the TF-IDF (Salton and McGill, 1984) score to determine important words as keywords. We calculate the TF-IDF based on the corpus and then select the words with the top- $l$ scores to construct the keyword-based planning $Y^{p} = k_{1}\# 1|k_{2}\# 2|\ldots |k_{l}\# l|$ , where $k_{i}$ is the $i$ -th keyword, “#” and “ $i$ ” are special tokens, and keywords are separated by “|”.
- 2) Similarly, for the relation-based (Rel) planning, we first apply the off-the-shelf OpenIE system (Angeli et al., 2015) to extract all the relational triplets in each essay and then randomly sample $l$ triplets to construct the relation-based planning $Y^{p} = s_{1}\# r_{1}\# o_{1}\# 1| \ldots |s_{l}\# r_{l}\# o_{l}\# l|$ , where $s_{i}, r_{i}$ and $o_{i}$ are the subject, relation and object of the $i$ -th triplet.

Note that we append “ $\#i$ ” after each keyword or each relational triplet to control the length of the generated planning, which has been shown to prevent the model from generating undesired excessive or insufficient keywords/triplets (Liao et al., 2019). Here, we refer to $l$ as the planning length, and we set $l$ to 10 in our main experiments. The impact of $l$ is discussed in Section 6.5.

![](images/150bebccfb2ac096711aa097704e9a72bd1a068da5314a795e7d73119626c109.jpg)
Figure 1: The architecture of our model.

# 4.2 Dual-decoder Model

For the essay generation task, we adopt the encoder-decoder architecture with a pre-trained BART backbone (Lewis et al., 2020) and extend it to a dual-decoder architecture. Figure 1 illustrates the overall architecture of the proposed dual decoder. Overall, the proposed model consists of a shared encoder to encode the input writing prompt, a planning decoder to generate a planning sequence, and a writing decoder to write the argumentative essay.

Shared Encoder We use the same encoder of BART as the shared encoder in our model, whose output will be utilized by both decoders. Specifically, we feed $X$ into the encoder:

$$
\mathbf {H} ^ {e} = \operatorname{Encoder}(X)
$$

where $\mathbf{H}^e\in \mathbb{R}^{m\times d}$ , and $d$ is the hidden dimension.

Planning Decoder (PD) Based on the input prompt sequence, the planning decoder serves to predict the planning that contains important information of the essay.
The generated planning can help plan the perspectives or aspects to be discussed in the essay before the formal writing, as well as enrich the wording and improve the diversity of the generated essay. Adopting a planning decoder keeps the planning and writing processes separate, with one decoder responsible for each. The reason behind this design is that the distributions of the planning text and the essay text are significantly different, and forcing a single decoder to handle both processes can decrease performance.

Our planning decoder is based on the decoder of BART, whose decoding target is $Y^{p}$ :

$$
\mathbf {h} _ {t} ^ {p d} = \mathrm{PD}(\mathbf {H} ^ {e}, Y _ {< t} ^ {p})
$$

$$
\hat {Y} _ {t} ^ {p} = \mathrm{Softmax}(\mathbf {W} ^ {p d} \mathbf {h} _ {t} ^ {p d} + \mathbf {b} ^ {p d})
$$

where $\mathbf{h}_t^{pd} \in \mathbb{R}^d$ is the hidden representation of the $t$ -th token, $\hat{Y}_t^p$ is the predicted token distribution, and $\mathbf{W}^{pd}$ and $\mathbf{b}^{pd}$ are learnable parameters.

Each Transformer layer of the BART decoder contains three sub-layers, i.e., a self multi-head attention layer, a cross multi-head attention layer and a feed-forward layer. For the self multi-head attention sub-layer of the $i$ -th Transformer layer, we denote the keys and values matrices as $\mathbf{K}_{pd}^{(i)}$ and $\mathbf{V}_{pd}^{(i)} \in \mathbb{R}^{l \times d}$ , which will be used to guide the writing decoder subsequently.
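To illustrate this guidance mechanism, the following is a minimal single-head, pure-Python sketch of attending over cached planning keys/values concatenated with the writing decoder's own keys/values; shapes and values are illustrative only, and the actual model operates on batched tensors inside BART's multi-head attention:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def planning_attention(Q_wd, K_pd, V_pd, K_wd, V_wd):
    """Single-head attention where writing-decoder queries attend over the
    concatenation K = [K_pd ; K_wd], V = [V_pd ; V_wd] (lists of vectors)."""
    K = K_pd + K_wd  # concatenate along the sequence dimension
    V = V_pd + V_wd
    d = len(Q_wd[0])
    out = []
    for q in Q_wd:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        attn = softmax(scores)
        out.append([sum(a * v[j] for a, v in zip(attn, V)) for j in range(len(V[0]))])
    return out
```

Because the planning keys/values are simply prepended, each essay token can attend to every planning token without any change to the queries, mirroring the equations of the planning attention module in Section 4.2.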
Writing Decoder (WD) The writing decoder incorporates the generated planning and the input writing prompt to write an essay:

$$
\mathbf {h} _ {t} ^ {w d} = \mathrm{WD}(\mathbf {H} ^ {e}, \mathbf {K} _ {p d}, \mathbf {V} _ {p d}, Y _ {< t} ^ {e})
$$

$$
\hat {Y} _ {t} ^ {e} = \mathrm{Softmax}(\mathbf {W} ^ {w d} \mathbf {h} _ {t} ^ {w d} + \mathbf {b} ^ {w d})
$$

where $\mathbf{h}_t^{wd} \in \mathbb{R}^d$ is the hidden representation of the $t$ -th token in $Y^e$ ; $\mathbf{K}_{pd}$ and $\mathbf{V}_{pd}$ are the keys and values of all the Transformer layers of PD; $\mathbf{W}^{wd}$ and $\mathbf{b}^{wd}$ are learnable parameters.

Here, we introduce a planning attention (PA) module that enables PD to guide WD. For each Transformer layer of WD, we modify the self multi-head attention sub-layer so that WD attends to all the tokens in the planning generated by PD when decoding each token of the essay. Specifically, when calculating the self multi-head attention in the $i$ -th Transformer layer of WD, we use $\mathbf{Q}_{wd}^{(i)}$ , $\mathbf{K}_{wd}^{(i)}$ and $\mathbf{V}_{wd}^{(i)}$ as the query, key and value:

$$
\mathbf {Q} _ {w d} ^ {(i)} = \mathbf {Q} _ {w d} ^ {(i) ^ {\prime}}
$$

$$
\mathbf {K} _ {w d} ^ {(i)} = [ \mathbf {K} _ {p d} ^ {(i)} \oplus \mathbf {K} _ {w d} ^ {(i) ^ {\prime}} ]
$$

$$
\mathbf {V} _ {w d} ^ {(i)} = [ \mathbf {V} _ {p d} ^ {(i)} \oplus \mathbf {V} _ {w d} ^ {(i) ^ {\prime}} ]
$$

where $\mathbf{Q}_{wd}^{(i)'}$ , $\mathbf{K}_{wd}^{(i)'}$ , $\mathbf{V}_{wd}^{(i)'} \in \mathbb{R}^{n \times d}$ are the original query, key and value matrices of the BART decoder Transformer layer, and $\oplus$ denotes matrix concatenation along the first (sequence) dimension.

Training & Inference.
During training, we use the negative log-likelihood loss:

$$
\mathcal {L} = \mathcal {L} _ {p} + \mathcal {L} _ {w}
$$

$$
\mathcal {L} _ {p} = - \sum_ {t = 1} ^ {l} \log P \left(Y _ {t} ^ {p} \mid Y _ {< t} ^ {p}, X\right)
$$

$$
\mathcal {L} _ {w} = - \sum_ {t = 1} ^ {n} \log P \left(Y _ {t} ^ {e} \mid Y _ {< t} ^ {e}, Y ^ {p}, X\right)
$$

where $\mathcal{L}_p$ and $\mathcal{L}_w$ are the loss functions for optimizing planning and writing, respectively.

During inference, we first generate the planning sequence and then write the essay, both in an autoregressive manner.

# 4.3 Pre-training

To better adapt the model to the plan-and-write paradigm, we explore first pre-training our model on a large news dataset and then fine-tuning it on our ArgEssay dataset. In detail, we employ CNN-DailyMail (Hermann et al., 2015) as the pre-training data, which is a large-scale news dataset commonly used for summarization. We treat the highlights as the prompts and the associated news articles as the essays. Regarding the planning sequences, the keywords/triplets are extracted from the news articles in the same way as described in Section 4.1.

# 5 Experimental Setups

# 5.1 Comparison Models

We build the following baselines for comparison.

BART BART (Lewis et al., 2020) is a strong sequence-to-sequence baseline model for natural language generation, which is pre-trained with several denoising tasks. We fine-tune the pre-trained BART model on our proposed ArgEssay dataset without using any planning information.

BART-KW Following approaches that incorporate knowledge information with the arguments in previous work (Schiller et al., 2021), we construct a BART-KW baseline by concatenating each planning before the essay as the overall target for prediction. That is, BART-KW first predicts the keyword planning and then generates the essay. BART-KW is also fine-tuned from BART-base.
DD-KW For our dual-decoder (DD) models, we denote the dual-decoder model with keyword-based planning as DD-KW. Note that DD-KW is not pre-trained on news data; we use BART-base as the starting point. Also, based on DD-KW, we implement the following two models for further comparisons:

DD-KW w/o planning-att We ablate the planning attention module, i.e., we replace the planning attention in DD-KW with normal attention, to investigate the effectiveness of using the planning to explicitly guide essay generation. Note that this model differs from BART in that the planning can still influence essay generation through the encoder during training.

DD-KW w. pre-training We apply the news pre-training to DD-KW (see Section 4.3).

BART-Rel and DD-Rel Similarly, for the methods using relation-based planning, we implement four models: BART-Rel, DD-Rel, DD-Rel $w/o$ planning-att and DD-Rel $w.$ pre-training.

# 5.2 Implementation Details

For all models, we use the pre-trained BART-base as the base model. Following previous work (Gretz et al., 2020; Xu et al., 2020; Khatib et al., 2021), for decoding at inference time we use a top-k sampling scheme with $k = 40$ and a temperature of 0.7. Our model is implemented in PyTorch (Paszke et al., 2019) and is trained on an NVIDIA Tesla V100 GPU. We restrict the generated text to be longer than 200 tokens. The AdamW optimizer (Kingma and Ba, 2015) is employed for parameter optimization with an initial learning rate of 3e-5.

# 5.3 Evaluation Metrics

Automatic Evaluation. We employ the following metrics for automatic evaluation. (1) Distinct measures the diversity of generated essays by computing the ratio of distinct n-grams to all generated n-grams (Li et al., 2016). (2) Novelty measures the difference between the generated essays and the training data. Specifically, following Yang et al. (2019) and Zhao et al.
(2020), for each generated essay, we calculate its Jaccard similarity coefficient based on n-grams with every essay in the training set and choose the highest similarity as the novelty score. (3) Repetition measures the redundancy of the generated essay by computing the percentage of generated essays that contain at least one repeated n-gram (Shao et al., 2019). (4) BLEU (Papineni et al., 2002) computes the n-gram overlap between the generated texts and the reference texts. If the readability or fluency of the + +
| Models | Dist-3 | Dist-4 | Nov-1 (↓) | Nov-2 (↓) | Rep-3 (↓) | Rep-4 (↓) | BLEU-4 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BART | 46.68 | 70.43 | 26.73 | 9.45 | 19.04 | 3.09 | 6.85 |
| BART-KW | 48.95 | 72.18 | 26.67 | 9.31 | 17.24 | 2.89 | 6.74 |
| DD-KW | 50.07 | 72.72 | 26.31† | 9.29 | 16.87† | 2.55 | 6.81 |
| w/o planning-att | 47.13 | 70.76 | 26.78 | 9.43 | 18.74 | 2.51† | 6.79 |
| w. pre-training | **51.35** | **73.71** | **26.26** | 9.21† | **16.75** | **2.39** | **6.94** |
| BART-Rel | 47.45 | 71.39 | 27.41 | 9.48 | 21.14 | 3.29 | 6.72 |
| DD-Rel | 49.10 | 72.55 | 26.99 | 9.34 | 19.24 | 2.67 | 6.83 |
| w/o planning-att | 47.16 | 70.63 | 26.78 | 9.46 | 19.34 | 3.09 | 6.93† |
| w. pre-training | 51.11† | 73.57† | 26.75 | **9.20** | 19.18 | **2.39** | 6.84 |
+ +Table 3: Automatic evaluation results [%]. Dist-n, Nov-n, Rep-n and BLEU-n denote the distinct, novelty, repetition and BLEU based on n-gram. The best score is in bold. $\dagger$ indicates the second best result. + +
| Models | Rel. | Coh. | Cont. |
| --- | --- | --- | --- |
| BART | 3.27 | 2.83 | 3.09 |
| BART-KW | 3.31 | 2.71 | 3.31 |
| DD-KW | 3.60 | 2.83 | 3.42 |
| w. pre-training | 3.63 | 3.05 | 3.49 |
| BART-Rel | 3.27 | 2.78 | 3.29 |
| DD-Rel | 3.59 | 2.82 | 3.36 |
| w. pre-training | 3.60 | 3.06 | 3.43 |
Table 4: Human evaluation results. Rel., Coh. and Cont. indicate relevance, coherence and content richness, respectively.

generated essay is poor, its BLEU score will be extremely low. Hence, we provide the BLEU score as a reference for assessing the essay's quality.

Here, distinct and novelty are used for assessing diversity, while repetition and BLEU are used for assessing quality.

Human Evaluation. For a more comprehensive analysis, we conduct human evaluations covering three aspects. (1) Relevance evaluates whether the entire content of the generated essay is semantically relevant to the given writing prompt, which is a basic requirement for a qualified argumentative essay. (2) Coherence indicates whether the generated essay is logically consistent and reasonable in terms of semantic and causal dependencies in the context, which is closely related to the persuasiveness of an argumentative essay. (3) Content Richness measures the number of distinct relevant aspects covered in the generated essay, which is a significant characteristic of argumentative essays.

All three aspects are scored from 1 (worst) to 5 (best). We randomly sampled 50 writing prompts from the test set. Each annotation item contains the input writing prompt and the generated essays of the different models. We assign 3 annotators to each item, none of whom are aware of which model each generated essay comes from.

# 6 Results and Analysis

# 6.1 Automatic Evaluation

Table 3 shows the automatic evaluation results. Compared to BART, our proposed DD-KW and DD-Rel achieve significantly better distinct scores and moderately better repetition and novelty scores. BART-KW and BART-Rel are worse in distinct, repetition, and novelty than DD-KW and DD-Rel, showing the effectiveness of the dual-decoder architecture. Also, removing the planning attention (w/o planning-att) decreases the distinct and repetition scores.
Regarding the BLEU scores, DD-KW and DD-Rel perform similarly to BART, indicating that the dual-decoder architecture does not degrade the readability and fluency of the generated essays. In addition, incorporating pre-training into our dual-decoder models can further boost the performance, showing that pre-training can enhance this plan-and-write generation paradigm. The average length of the essays generated by each model is around 290-300 tokens.

Overall, with the support of the dual-decoder architecture and the pre-training strategy, our model can generate more diverse and less repetitive essays while maintaining good readability and fluency.
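For reference, the Distinct-n and Repetition-n metrics discussed above (defined in Section 5.3) can be computed as in the following sketch; whitespace tokenization is a simplifying assumption:

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def distinct_n(essays, n):
    """Ratio of distinct n-grams to all generated n-grams (Li et al., 2016)."""
    all_ngrams = [g for e in essays for g in ngrams(e.split(), n)]
    return len(set(all_ngrams)) / len(all_ngrams) if all_ngrams else 0.0

def repetition_n(essays, n):
    """Percentage of essays containing at least one repeated n-gram."""
    def has_repeat(e):
        gs = ngrams(e.split(), n)
        return len(gs) != len(set(gs))
    return 100.0 * sum(has_repeat(e) for e in essays) / len(essays)
```

A higher `distinct_n` indicates more diverse output, while a lower `repetition_n` indicates less redundant output, matching the (↓) annotation for Rep-n in Table 3.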
| Models | Rec. | Rep. (↓) | Inv. (↓) | Rel. |
| --- | --- | --- | --- | --- |
| BART-KW | 18.06 | 6.45 | - | 77.40 |
| DD-KW | 19.41 | 1.80 | - | 82.00 |
| w. pre-training | 23.95 | 1.01 | - | 84.80 |
| BART-Rel | 14.81 | - | 1.76 | 72.20 |
| DD-Rel | 15.05 | - | 0.85 | 76.60 |
| w. pre-training | 15.43 | - | 0.40 | 78.40 |
# 6.2 Human Evaluation

The results of the human evaluation are presented in Table 4. The average Fleiss' kappa is 0.42. Regarding relevance, BART, BART-KW, and BART-Rel perform poorly because of the topic drift problem, that is, the generated essay is barely relevant to the given topic (see the case study in Appendix C for details). Compared to BART, all other models with planning achieve better content richness scores, since the generated planning can provide more diverse aspect information and guide the models to write essays containing more examples or perspectives. Also, the pre-training strategy brings a significant improvement in coherence.

# 6.3 Planning Quality

We measure the quality of the generated plannings from the following aspects: (1) Recall evaluates how many keywords/triplets in the oracle planning sequence are predicted. (2) Keyword Repetition (only for keyword-based planning) measures how many keywords in the generated planning sequence are repeated at least once. (3) Invalidity (only for relation-based planning) measures how many generated triplets are invalid, i.e., not in the form described in Section 4.1. (4) Planning Relevance evaluates whether each predicted keyword/triplet is relevant to the prompt, and is obtained by manual analysis of 50 randomly selected samples.

As shown in Table 5, simply using a single decoder to generate the planning and the essay together (BART-KW and BART-Rel) causes the problem of a high keyword repetition or invalidity rate. In contrast, employing an individual planning decoder (DD-KW and DD-Rel) not only improves both the recall and the planning relevance, but also alleviates the keyword repetition or invalidity problem. Moreover, we can also observe that the planning quality can be further refined by pre-training our dual-decoder models.

Table 5: Planning quality evaluation [%]. Rec., Rep., Inv. and Rel. indicate recall, keyword repetition, invalidity and planning relevance, respectively.
+ +
| Models | Appearance | Appropriateness |
| --- | --- | --- |
| BART-KW | 63.72 | 66.80 |
| DD-KW | 66.63 | 71.60 |
| w/o planning-att | 43.66 | 47.60 |
| w. pre-training | 72.58 | 73.20 |
| BART-Rel | 43.43 | 43.40 |
| DD-Rel | 51.31 | 52.40 |
| w/o planning-att | 19.01 | 37.20 |
| w. pre-training | 52.99 | 57.40 |
Table 6: Controllability evaluation [%].

![](images/84d15f512ed5992a3b71b4c24a1681104382e225279ade6302de3fbf78355b53.jpg)
Figure 2: Impact of the length of planning.

# 6.4 Controllability Evaluation

To evaluate how well the generated essay can be controlled by the planning, we measure whether each keyword/triplet appears in the generated essay (Appearance). Also, we manually check 50 generated samples and determine whether the information enclosed by each keyword/triplet is appropriately used (Appropriateness). As shown in Table 6, BART-KW and BART-Rel achieve low appearance and appropriateness, while our dual-decoder models (DD-KW and DD-Rel) give significantly better results. With pre-training, around $73.20\% / 57.40\%$ of the keywords/triplets are appropriately adopted by the writing decoder, showing high controllability. Besides, removing the planning attention module (w/o planning-att) decreases both appearance and appropriateness dramatically.

# 6.5 Impact of the Planning Length

On top of the models with keyword-based planning, we further investigate the impact of the planning length $l$ on diversity (Dist-4) and accuracy (BLEU-4). As shown in Figure 2, for all models, as the planning length grows, the diversity increases but the accuracy decreases. By manual review, we find that the readability of the essays becomes extremely poor (low fluency and high repetition) when BLEU-4 falls below about 6.3. Thus, selecting a proper planning length is crucial for generating essays that are both diverse and readable.

Nevertheless, our pre-trained dual-decoder model (DD-KW w. pre-training) can not only achieve better diversity with an appropriate planning length, but also ensure better readability than the baselines even under extreme conditions.

# 7 Conclusion

In this paper, we propose a challenging new task, AEG, to generate long-form and coherent argumentative essays.
To tackle this task, we present a large-scale dataset and further devise a dual-decoder architecture on the basis of BART, which can generate a planning and a planning-guided essay in an end-to-end fashion. The experimental results demonstrate the superiority of our model. For future work, we plan to draw on external knowledge to generate more diverse and informative argumentative essays.

# Limitations

First, as discussed in Appendix C, there is still an undeniable gap between generated essays and human-written essays in terms of logical coherence. In our method, we do not design mechanisms to ensure the factual and causal logicality of the generated essays, which remains a great challenge. Hence, future work could consider improving the logical coherence of the generated essays by using external knowledge or causal inference techniques.

Second, although our dual-decoder architecture enables content planning and generates better essays, it also introduces some new parameters and computations. Future work could thus investigate more efficient methods with fewer model parameters.

# Ethics Statement

Our dataset is collected from publicly available sources without any personal identity characteristics. When crawling data from the online platform "essayforum.com", we carefully read and followed the privacy policy and terms of use of this platform. According to the agreement of this platform, any content on it can be accessed and used with an indication of the source.

Since the administrators of the online platform we use review and remove any posts that are considered to be libelous, racist, or otherwise inappropriate, the ethical soundness of our dataset can be assured. We also manually double-check each sample in our dataset to confirm that no ethical issues exist.
+ +# Acknowledgments + +This work was partially supported by the National Natural Science Foundation of China 62006062 and 62176076, Shenzhen Foundational Research Funding JCYJ20200109113441941, JCYJ20210324115614039, the Major Key Project of PCL2021A06, Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies 2022B1212010005. + +# References + +Milad Alshomary, Wei-Fan Chen, Timon Gurcke, and Henning Wachsmuth. 2021a. Belief-based generation of argumentative claims. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 224-233. Association for Computational Linguistics. +Milad Alshomary, Shahbaz Syed, Arkajit Dhar, Martin Potthast, and Henning Wachsmuth. 2021b. Counterargument generation by attacking weak premises. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1816-1827, Online. Association for Computational Linguistics. +Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 344-354. The Association for Computer Linguistics. +Roxanne El Baff, Henning Wachsmuth, Khalid Al Khatib, Manfred Stede, and Benno Stein. 2019. Computational argumentation synthesis as a language modeling task. In Proceedings of the 12th International Conference on Natural Language Generation, INLG 2019, Tokyo, Japan, October 29 - November 1, 2019, pages 54-64. Association for Computational Linguistics. +Jianzhu Bao, Chuang Fan, Jipeng Wu, Yixue Dang, Jiachen Du, and Ruifeng Xu. 2021.
A neural transition-based model for argumentation mining. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6354-6364. Association for Computational Linguistics. +Beata Beigman Klebanov and Michael Flor. 2013. Argumentation-relevant metaphors in test-taker essays. In Proceedings of the First Workshop on Metaphor in NLP, pages 11-20, Atlanta, Georgia. Association for Computational Linguistics. + +Yonatan Bilu and Noam Slonim. 2016. Claim synthesis via predicate recycling. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 2: Short Papers. The Association for Computer Linguistics. +Daniel Blanchard, Joel Tetreault, Derrick Higgins, Aoife Cahill, and Martin Chodorow. 2013. TOEFL11: A corpus of non-native English. ETS Research Report Series, 2013:i-15. +Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie S. Chen, Kathleen Creel, Jared Quincy Davis, Dorottya Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna, Rohith Kuditipudi, and et al. 2021. On the opportunities and risks of foundation models. CoRR, abs/2108.07258. +Tom B.
Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. +Giuseppe Carenini and Johanna D. Moore. 2000. A strategy for generating evaluative arguments. In INLG 2000 - Proceedings of the First International Natural Language Generation Conference, June 12-16, 2000, Mitzpe Ramon, Israel, pages 47-54. The Association for Computer Linguistics. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics. + +Xiangyu Dong, Wenhao Yu, Chenguang Zhu, and Meng Jiang. 2021. Injecting entity types into entity-guided text generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November 2021, pages 734-741. Association for Computational Linguistics. +Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 11-22. Association for Computational Linguistics. +Angela Fan, Mike Lewis, and Yann N. Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 889-898. Association for Computational Linguistics. +Angela Fan, Mike Lewis, and Yann N. Dauphin. 2019. Strategies for structuring story generation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 2650-2660. Association for Computational Linguistics. +Xiaocheng Feng, Ming Liu, Jiahao Liu, Bing Qin, Yibo Sun, and Ting Liu. 2018. Topic-to-essay generation with neural networks. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4078-4084. ijcai.org. +Shai Gretz, Yonatan Bilu, Edo Cohen-Karlik, and Noam Slonim. 2020. The workweek is the best time to start a family - A study of GPT-2 based claim generation. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 528-544. Association for Computational Linguistics. +Jian Guan, Fei Huang, Minlie Huang, Zhihao Zhao, and Xiaoyan Zhu. 2020. A knowledge-enhanced pretraining model for commonsense story generation. Trans. Assoc. Comput. Linguistics, 8:93-108. +Jian Guan, Xiaoxi Mao, Changjie Fan, Zitao Liu, Wenbiao Ding, and Minlie Huang. 2021. Long text generation by modeling sentence-level and discourse-level coherence. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6379-6393. Association for Computational Linguistics. + +Karl Moritz Hermann, Tomás Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693-1701. +Christopher Hidey and Kathy McKeown. 2019. Fixed that for you: Generating contrastive claims with semantic edits. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 1756-1767. Association for Computational Linguistics. +Xinyu Hua, Zhe Hu, and Lu Wang. 2019. Argument generation with retrieval, planning, and realization. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 2661-2672. Association for Computational Linguistics. +Xinyu Hua, Ashwin Sreevatsa, and Lu Wang. 2021. DYPLOC: dynamic planning of content using mixed language models for text generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6408-6423. Association for Computational Linguistics. +Xinyu Hua and Lu Wang. 2018. Neural argument generation augmented with externally retrieved evidence. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 219-230. Association for Computational Linguistics. +Xinyu Hua and Lu Wang. 2019. Sentence-level content planning and style specification for neural text generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 591-602. Association for Computational Linguistics. +Xinyu Hua and Lu Wang. 2020. PAIR: planning and iterative refinement in pre-trained transformers for long text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 781-793. Association for Computational Linguistics. +Khalid Al Khatib, Lukas Trautner, Henning Wachsmuth, Yufang Hou, and Benno Stein. 2021. Employing argumentation knowledge graphs for neural argument generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4744-4754. Association for Computational Linguistics. +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, and Kentaro Inui. 2019. An empirical study of span representations in argumentation structure parsing. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4691-4698.
Association for Computational Linguistics. +Ran Levy, Ben Bogin, Shai Gretz, Ranit Aharonov, and Noam Slonim. 2018. Towards an argumentative content search engine using weak supervision. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 2066-2081. Association for Computational Linguistics. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics. +Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 110-119. The Association for Computational Linguistics. +Yi Liao, Yasheng Wang, Qun Liu, and Xin Jiang. 2019. GPT-based generation for classical Chinese poetry. CoRR, abs/1907.00151. +Zhiyue Liu, Jiahai Wang, and Zhenghong Li. 2021. Topic-to-essay generation with comprehensive knowledge enhancement. In Machine Learning and Knowledge Discovery in Databases. Applied Data Science Track - European Conference, ECML PKDD 2021, Bilbao, Spain, September 13-17, 2021, Proceedings, Part V, volume 12979 of Lecture Notes in Computer Science, pages 302-318. Springer. + +Nitin Madnani, Michael Heilman, Joel R. Tetreault, and Martin Chodorow. 2012. Identifying high-level organizational elements in argumentative discourse.
In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, June 3-8, 2012, Montréal, Canada, pages 20-28. The Association for Computational Linguistics. +Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, System Demonstrations, pages 55-60. The Association for Computer Linguistics. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311-318. ACL. +Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024-8035. +Isaac Persing and Vincent Ng. 2016. End-to-end argumentation mining in student essays. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 1384-1394. The Association for Computational Linguistics. +Peter Potash, Alexey Romanov, and Anna Rumshisky. 2017. Here's my point: Joint pointer architecture for argument mining.
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1364-1373. Association for Computational Linguistics. +Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with content selection and planning. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6908-6915. AAAI Press. + +Lin Qiao, Jianhao Yan, Fandong Meng, Zhendong Yang, and Jie Zhou. 2020. A sentiment-controllable topic-to-essay generator with topic knowledge graph. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 3336-3344. Association for Computational Linguistics. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. +Chris Reed. 1999. The role of saliency in generating natural language arguments. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, IJCAI 99, Stockholm, Sweden, July 31 - August 6, 1999. 2 Volumes, 1450 pages, pages 876-883. Morgan Kaufmann. +Chris Reed, Derek Long, and Maria Fox. 1996. An architecture for argumentative dialogue planning. In International Conference on Formal and Applied Practical Reasoning, pages 555-566. Springer. +Gerard Salton and Michael McGill. 1984. Introduction to Modern Information Retrieval. McGraw-Hill Book Company. +Misa Sato, Kohsuke Yanai, Toshinori Miyoshi, Toshihiko Yanase, Makoto Iwayama, Qinghua Sun, and Yoshiki Niwa. 2015. End-to-end argument generation system in debating.
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, System Demonstrations, pages 109-114. The Association for Computer Linguistics. +Benjamin Schiller, Johannes Daxenberger, and Iryna Gurevych. 2021. Aspect-controlled neural argument generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 380-396. Association for Computational Linguistics. +Zhihong Shao, Minlie Huang, Jiangtao Wen, Wenfei Xu, and Xiaoyan Zhu. 2019. Long and diverse text generation with planning-based hierarchical variational model. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3255-3266. Association for Computational Linguistics. +Noam Slonim, Yonatan Bilu, Carlos Alzate, Roy Bar-Haim, Ben Bogin, Francesca Bonin, Leshem Choshen, Edo Cohen-Karlik, Lena Dankin, Lilach Edelstein, et al. 2021. An autonomous debating system. Nature, 591(7850):379-384. + +Christian Stab, Johannes Daxenberger, Chris Stahlhut, Tristan Miller, Benjamin Schiller, Christopher Tauchmann, Steffen Eger, and Iryna Gurevych. 2018. Argument: Searching for arguments in heterogeneous sources. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 2-4, 2018, Demonstrations, pages 21-25. Association for Computational Linguistics. +Christian Stab and Iryna Gurevych. 2014. Annotating argument components and relations in persuasive essays. 
In COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, August 23-29, 2014, Dublin, Ireland, pages 1501-1510. ACL. +Christian Stab and Iryna Gurevych. 2017. Parsing argumentation structures in persuasive essays. Comput. Linguistics, 43(3):619-659. +Stephen E. Toulmin. 2003. The Uses of Argument, 2nd edition. Cambridge University Press. +Henning Wachsmuth, Shahbaz Syed, and Benno Stein. 2018. Retrieval of the best counterargument without prior topic knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 241-251. Association for Computational Linguistics. +Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Raul Puri, Pascale Fung, Anima Anandkumar, and Bryan Catanzaro. 2020. MEGATRON-CNTRL: controllable story generation with external knowledge using large-scale language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 2831-2845. Association for Computational Linguistics. +Toshihiko Yanase, Toshinori Miyoshi, Kohsuke Yanai, Misa Sato, Makoto Iwayama, Yoshiki Niwa, Paul Reisert, and Kentaro Inui. 2015. Learning sentence ordering for opinion generation of debate. In Proceedings of the 2nd Workshop on Argumentation Mining, ArgMining@HLT-NAACL 2015, June 4, 2015, Denver, Colorado, USA, pages 94-103. The Association for Computational Linguistics. +Pengcheng Yang, Lei Li, Fuli Luo, Tianyu Liu, and Xu Sun. 2019. Enhancing topic-to-essay generation with external commonsense knowledge. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 2002-2012. Association for Computational Linguistics. +Lili Yao, Nanyun Peng, Ralph M. Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019.
Plan-and-write: Towards better automatic storytelling. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 7378-7385. AAAI Press. +Wenhao Yu, Chenguang Zhu, Tong Zhao, Zhichun Guo, and Meng Jiang. 2021. Sentence-permuted paragraph generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November 2021, pages 5051-5062. Association for Computational Linguistics. +Liang Zhao, Jingjing Xu, Junyang Lin, Yichang Zhang, Hongxia Yang, and Xu Sun. 2020. Graph-based multi-hop reasoning for long text generation. CoRR, abs/2009.13282. +Ingrid Zukerman, Richard McConachy, and Sarah George. 2000. Using argumentation strategies in automated argument generation. In INLG 2000 - Proceedings of the First International Natural Language Generation Conference, June 12-16, 2000, Mitzpe Ramon, Israel, pages 55-62. The Association for Computer Linguistics. + +# Appendices + +# A An Example Post from Essay Forum + +An example post from the writing feedback section of the Essay Forum platform is shown in Figure 3. + +# B Rules For Data Pre-processing + +# B.1 Filtering Details + +- Removing prompt-essay pairs which are from IELTS writing task 1 by checking for the presence of keywords like "bar", "chart", "diagram" and "task 1" in the prompts, since the essays in these samples are graphical analyses rather than argumentative essays. +- Removing prompt-essay pairs about narratives, character descriptions, or letters by checking for keywords such as "describe", "describing", "letter", "narrative", "summary", etc. + +# B.2 Data Cleaning Details + +- Deleting special characters like "=", "*", "#", "+", etc.
+- Selecting prompt-essay pairs that contain irrelevant text expressing gratitude, asking for help, greeting, or self-introduction by matching keywords like "please", "pls", "grammar", "hello", "feedback", "comment", "my name", "my essay", "thank", "appreciated", etc., then manually checking and deleting this irrelevant text. + +# C Case Study + +Table 7 shows several sample outputs from different models for the writing prompt about "multinational company". We show only a snippet of each essay, taken from a similar location in each essay. + +We can see that BART and BART-KW exhibit the topic drift problem to different degrees, i.e., the generated text is less relevant to the given topic of "multinational corporations". In contrast, the models with dual decoders avoid this problem by better generating and utilizing the essay content planning. Regarding planning generation, BART-KW suffers from generating plannings with repeated keywords. Also, as can be seen, the pre-trained dual-decoder models can better leverage the planning to guide the essay generation. For example, the generated essay of (DD-KW w. pre-training) can coherently embody the keywords "employees", "coexist", "corporations", etc. By comparison, DD-KW without pre-training performs slightly worse on this point. + +![](images/d7904c4d0391099fa5194ce3e2f98e1e8dc7919c2bcba42b00aa19e929c2f356.jpg) +Figure 3: An example post from Essay Forum. + +However, it is undeniable that there is still a large gap between the performance of our models and that of humans, which we plan to investigate in future work. For example, the human-written essay in Table 7 is significantly better than all the model-generated essays in terms of logical coherence. + +Writing Prompt: Nowadays, most large companies operate multinationally. To what extent those companies should have responsibility towards local communities in which they operate?
+ +# Human: + +Keywords: local # 1 | native # 2 | firms # 3 | royal # 4 | respect # 5 | rivers # 6 | operations # 7 | indigenous # 8 | international # 9 | ecosystem # 10 | Triplets: global operations # respect # national customs # 1 | firms # should discover # insights # 2 | operations # stay away # taboos # 3 | (...) Essay: (...) More importantly, global operations ought to respect the national customs where they set up. In other words, the firms should discover insights into the native cultures and deter themselves from offending indigenous people or what they respect. For example, when running business in Thailand, the organizations cannot advertise their products by using pictures of royal Kings as Thai citizens extremely respect royal family. Therefore, international operations should develop carefully and stay away from some taboos, which offend the inhabitants. (...) + +# BART: + +Essay: (...) Secondly, the public of the city can know about the local communities. They should be aware of the situations which they are dealing with. To elaborate, the local governments should make a better understanding of the cultures of the districts and they should also pay attention to the surrounding communities. Furthermore, they should be a part of the local companies' culture. (...) + +# BART-KW: + +Keywords: operate # 1 | companies # 2 | corporations # 3 | localities # 4 | local # 5 | multination # 6 | multi # 7 | multi # 8 | responsibility # 9 | response # 10 | + +Essay: (...) Additionally, the governments should also take a responsibility for the local companies. As the workers are being recruited more and more, the profits from these organizations will increase. For example, the factories can operate in different areas, in order for the workers to get a better life. Moreover, the workers will be more motivated to work more and have more opportunities to earn more income. (...) 
+ +# DD-KW: + +Keywords: multinational # 1 | teamwork # 2 | scale # 3 | workers # 4 | operate # 5 | company # 6 | local # 7 | ethos # 8 | collaboration # 9 | businesses # 10 | + +Essay: (...) On the other hand, multinational companies have a large number of employees. Many of them spend most of their time in a different place and do not have the time to engage with the other employees. Moreover, The employees are boundlessly affected by the cultures, traditions, customs, and traditions. Therefore it is necessary for them to take a team effort to build a good relationship with the others and make sure that they have a good collaboration. (...) + +# DD-KW w. pre-training: + +Keywords: operate # 1 | profit # 2 | corporations # 3 | coexist # 4 | cities # 5 | multinational # 6 | local # 7 | responsibility # 8 | employees # 9 | development # 10 | + +Essay: (...) On the other hand, the main responsibility of the large multinational corporations is to coexist with the communities in the local areas, and they should share the costs and amenities of their business. In other words, they should be aware of the needs of their employees and the benefits of their job. Furthermore, they have to offer equal opportunities to all of their staffs. For instance, in Bangladesh, the government has made it mandatory for the companies to co-operate with the local community in order to facilitate the overall development of their businesses. (...) + +# BART-Rel: + +Triplets: factories # have responsibility # to # provide # 1 | companies # invest # money # 1 # 2 | local businesses # have # responsibility # 3 | (...). Essay: (...) Furthermore, it also can be argued if the local government has more responsibility for the well-being of their citizens. In some cases, they could not provide the basic needs for their citizens, such as food and clothing. If a company only has a little staff, they have to make sure their local residents have healthy food and a stable living environment. (...) 
+ +# DD-Rel: + +Triplets: companies # are # able # 1 | their work # is with # their employee # 2 | many large enterprises # operate # multinationally # 3 | (... ) Essay: (...) I think that a large company would have some responsibilities in the areas where they operate. Many multinational companies operate multilaterally, so they should have a responsibility towards the areas in which they operate in. In other words, they should be accountable for the way they operate from one part of the world to another. If these companies were able to build a business with their employees, they would have the ability to control the way that they operate, which would create a big impact on their business. (...) + +# DD-Rel w. pre-training: + +Triplets: companies # should have # their efforts # 1 | their efforts # is with # different initiatives # 2 | local people # is in # their local areas # 3 | most large firms # operate # multinationally # 4 | (... + +Essay: (...) On the other hand, these multinationally large firms should have their own efforts regarding the local community. It is believed that their efforts can improve the situation of the community and people in their local areas. For instance, it could be better to invest in infrastructure that could improve the lives of the residents. Moreover, it allows them to start their efforts with different initiatives that would help them to increase their efficiency. (...) + +Table 7: Case study. 
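The Appearance score in Section 6.4 measures whether each planned keyword appears in the generated essay. A minimal sketch of this computation is shown below; the function and variable names are illustrative rather than taken from the authors' code, and the exact matching procedure (e.g. any lemmatisation or case handling) is an assumption. Appropriateness is judged manually in the paper and is not reproduced here.

```python
def parse_planning(line):
    """Parse a planning string like 'operate # 1 | profit # 2 | ...' into keywords.

    Illustrative parser: it assumes the 'keyword # rank' format shown in
    Table 7 and ignores the rank after '#'.
    """
    return [part.split("#")[0].strip() for part in line.split("|") if part.strip()]


def appearance(planning, essay):
    """Fraction of planned keywords that literally appear in the generated essay.

    Case-insensitive substring matching is an assumption; the authors'
    implementation may match tokens or lemmas instead.
    """
    essay_lower = essay.lower()
    hits = [kw for kw in planning if kw.lower() in essay_lower]
    return len(hits) / len(planning) if planning else 0.0
```

For instance, `appearance(parse_planning("operate # 1 | profit # 2"), essay_text)` returns the proportion of the two planned keywords that the essay contains.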
# A Federated Approach to Predicting Emojis in Hindi Tweets

Deep Gandhi$^{1}$, Jash Mehta$^{2}$, Nirali Parekh$^{3}$, Karan Waghela$^{4}$, Lynette D'Mello$^{5}$, Zeerak Talat$^{6}$

$^{1}$University of Alberta, $^{2}$Georgia Institute of Technology, $^{3}$Stanford University, $^{4}$Santa Clara University, $^{5}$DJ Sanghvi College of Engineering, $^{6}$Simon Fraser University

$^{1}$drgandhi@ualberta.ca, $^{2}$jmehta73@gatech.edu, $^{3}$nirali25@stanford.edu, $^{4}$kwaghela@scu.edu, $^{5}$lynette.dmello@djsce.ac.in, $^{6}$zeerak_talat@sfu.ca

# Abstract

The use of emojis affords a visual modality to, often private, textual communication.
The task of predicting emojis, however, poses a challenge for machine learning, as emoji use tends to cluster into frequently used and rarely used emojis. Much of the machine learning research on emoji use has focused on high resource languages and has conceptualised the task of predicting emojis around traditional server-side machine learning approaches. However, traditional machine learning approaches for private communication can introduce privacy concerns, as these approaches require all data to be transmitted to central storage. In this paper, we seek to address the dual concerns of emphasising high resource languages for emoji prediction and risking the privacy of people's data. We introduce a new dataset of 118k tweets (augmented from 25k unique tweets) for emoji prediction in Hindi,$^{1}$ and propose a modification to the federated learning algorithm CausalFedGSD, which aims to strike a balance between model performance and user privacy. We show that our approach obtains comparable scores to more complex centralised models while reducing the amount of data required to optimise the models and minimising risks to user privacy.

# 1 Introduction

Since the creation of emojis around the turn of the millennium (Stark and Crawford, 2015; Alshenqeeti, 2016), they have become a staple of informal textual communication, expressing emotion and intent in written text (Barbieri et al., 2018b). This development in communication style has prompted research into emoji analysis and prediction for English (e.g. Barbieri et al., 2018a,b; Felbo et al., 2017; Tomihira et al., 2020; Zhang et al., 2020). Comparatively little research attention has been given to low resource languages.

Emoji prediction has posed a challenge for the research community because emojis span multiple modalities, carry visual semantics, and can stand in place of words (Padilla López and Cap, 2017).
The challenge is further compounded by the quantity of emojis sent and the imbalanced distribution of emoji use (Cappallo et al., 2018; Padilla López and Cap, 2017). Machine learning (ML) for emoji analysis and prediction has traditionally relied on server-side architectures. However, training such models risks leaking sensitive information that may co-occur with emojis or be expressed through them. This can lead to potential breaches of data privacy regulation (e.g. the European General Data Protection Regulation and the California Consumer Privacy Act). In contrast, federated learning (FL) (McMahan et al., 2017) approaches the task of training machine learning models by emphasising the privacy of data. Such privacy is ensured by training models locally and sharing weight updates, rather than the data, with a central server (see Figure 1). The FL approach assumes that some client updates may be corrupted during transmission. FL therefore aims to retain predictive performance while emphasising user privacy in scenarios with potential data loss.

Motivated by prior work in privacy-preserving ML (e.g. Ramaswamy et al., 2019; Yang et al., 2018) and emoji prediction for low resource languages (e.g. Choudhary et al., 2018b), we examine the application of FL to emoji topic prediction for Hindi. Specifically, we collect an imbalanced dataset of 118,030 tweets in Hindi containing 700 unique emojis, which we classify into 10 pre-defined categories. We further

![](images/368e179b349ee7ace545ee2749d661645c465c0ed471458d36bc11fddffd503a.jpg)
Figure 1: The Federated Learning process: (A) client devices compute weight updates on locally stored data, (B) client weight updates are transmitted to the server and used to update the global model, (C) the resulting global model is redistributed to all clients.
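The server-side half of the loop in Figure 1 is, at its core, FederatedAveraging: client weight vectors are combined, weighted by how much data each client holds. A minimal sketch with toy weight vectors; `fed_avg` is an illustrative helper, not the paper's implementation:

```python
def fed_avg(client_weights, client_sizes):
    """FederatedAveraging: average client weight vectors into the global
    model, weighting each client by the amount of data it trained on."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * size for w, size in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# One communication round, following Figure 1:
# (A) clients compute local updates (stubbed here as ready-made weight vectors),
# (B) only the weights travel to the server, where they are averaged,
# (C) the resulting global model would then be redistributed to the clients.
client_weights = [[0.1, 0.5], [0.3, 0.7], [0.2, 0.9]]
client_sizes = [100, 300, 200]
global_weights = fed_avg(client_weights, client_sizes)
```

Only `global_weights` leaves the server; the raw client data never does, which is what the privacy argument rests on.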
examine the impact of two different data balancing strategies on federated and server-side, centralised model performance. Specifically, we examine re-sampling and cost-sensitive re-weighting. We consider 6 centralised models which form our baselines: a bi-directional LSTM (Hochreiter and Schmidhuber, 1997), IndicBert (Kakwani et al., 2020), HindiBERT,$^{3}$ Hindi-Electra,$^{4}$ mBERT (Devlin et al., 2019), and XLM-R (Conneau et al., 2020); and LSTMs trained using two FL algorithms: FedProx (Li et al., 2018) and a modified version of CausalFedGSD (Francis et al., 2021).

We show that LSTMs trained using FL perform competitively with more complex, centralised models in spite of only using up to $50\%$ of the data.

# 2 Prior work

**Federated Learning** Federated Learning (FL, McMahan et al., 2017) is a training procedure that distributes the training of models onto a number of client devices. Each client device locally computes weight updates on the basis of local data, and transmits the updated weights to a central server. In this way, FL can help prevent computational bottlenecks when training models on a large corpus while simultaneously preserving privacy by not transmitting raw data. This training approach has previously been applied for on-device token prediction on mobile phones for English. In a study of the quality of mobile keyboard suggestions, Yang et al. (2018) show that FL improves the quality of suggested words. Addressing emoji prediction in English, Ramaswamy et al. (2019) use the FederatedAveraging algorithm to improve on traditional server-based models on user devices. We diverge from Ramaswamy et al. (2019) by using the CausalFedGSD and FedProx algorithms on Hindi tweets. FedProx builds on the FederatedAveraging algorithm by introducing a regularization constant (Li et al., 2018). In related work, Choudhary et al. (2018b) seek to address the question of FL for emoji prediction for low resource languages.
However, the dataset that they use (Choudhary et al., 2018a) relies on emojis that are frequently used in English text and therefore may not be representative of emoji use in other, low resource languages.

**Centralised Training** In efforts to extend emoji prediction, Ma et al. (2020) experiment with a BERT-based model on a new English dataset that includes a large set of emojis for multi-label prediction. Addressing the issue of low resource languages, Choudhary et al. (2018b) train a bidirectional LSTM-based siamese network, jointly training their model with high resource and low resource languages. A number of studies on emoji prediction have been conducted in lower-resourced languages than English (e.g. Liebeskind and Liebeskind, 2019; Ronzano et al., 2018; Choudhary et al., 2018a; Barbieri et al., 2018a; Duarte et al., 2020; Tomihira et al., 2020). Common to these approaches is the use of centralised ML models, which increases the risk of breaches of privacy. In our experiments, we study using FL for emoji topic classification in low resource settings.

# 3 Data

We collect our dataset for emoji topic prediction by scraping $\sim 1\mathrm{M}$ tweets. We only keep the 24,794 tweets that are written in Hindi and contain at least one emoji. We duplicate all tweets that contain multiple emojis by the number of emojis contained, assigning a single emoji to each copy, resulting in a dataset of 118,030 tweets with 700 unique emojis. Due to the imbalanced distribution of emojis in our dataset (see Figure 2), we assign emojis to 10 coarse-grained categories. This reduction, i.e., from multi-label to multi-class and from unique emojis to categories, risks losing the semantic meaning of emojis. Our decision is motivated by how challenging emoji prediction is without such reductions (Choudhary et al., 2018b).

We pre-process our data to limit the risk of over-fitting to rare tokens and platform-specific tokens.
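The duplication step described above, one copy of a tweet per emoji it contains, can be sketched as follows; the emoji regex is a simplified stand-in, not the authors' actual extraction:

```python
import re

# Simplified emoji matcher for illustration only; real-world emoji extraction
# covers more codepoint ranges (skin tones, ZWJ sequences, flags, ...).
EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def explode_by_emoji(tweets):
    """Duplicate each tweet once per emoji it contains, pairing the
    emoji-free text with a single emoji label per copy."""
    examples = []
    for text in tweets:
        emojis = EMOJI.findall(text)
        stripped = EMOJI.sub("", text).strip()
        examples.extend((stripped, emoji) for emoji in emojis)
    return examples
```

A tweet with three emojis thus yields three single-emoji training examples, which is how 24,794 unique tweets can grow into 118,030 examples.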
![](images/5e842940e5338fa9763006474bcaf371c8a29788dfd46cf0d2852118192aea5a.jpg)
Figure 2: Distribution of the 15 most frequently appearing emojis in our dataset.

For instance, we lower-case all text and remove numbers, punctuation, and retweet markers. We replace mentions, URLs, and hashtags with specific tokens to avoid over-fitting to these.

# 3.1 Balancing data

This dataset exhibits a long tail in the distribution of emoji categories (see Figure 3), with the vast majority of tweets belonging to the "Smileys & Emotions" and "People & Body" categories. To address this issue, we use two different data balancing methods: re-sampling (He and Garcia, 2009) and cost-sensitive re-weighting (Khan et al., 2017).

**Re-Sampling** Re-sampling has been widely used to address issues of class imbalance (e.g. Buda et al., 2018; Zou et al., 2018; Geifman and El-Yaniv, 2017; Shen et al., 2016). We balance the training data by up-sampling the minority class (Drummond, 2003) and down-sampling the majority class (Chawla et al., 2002), resulting in a balanced dataset of 94,420 tweets (9,442 documents per class). The validation and test sets are left unmodified to ensure a fair and realistic evaluation.

**Cost-Sensitive Learning** Another method for addressing data imbalance is cost-sensitive learning (see Zhou and Liu, 2005; Huang et al., 2016; Ting, 2000; Sarafianos et al., 2018). In this method, each class is assigned a weight which is used to weigh the loss function (Lin et al., 2017). For our models, we set the class weights to the inverse class frequencies.

# 4 Experiments

We conduct experiments with PyTorch (Paszke et al., 2019) and Transformers (Wolf et al., 2020) on Google Colab using an Nvidia Tesla V100 GPU with 26GB of RAM. We create train, validation, and test (80/10/10) splits of the dataset, and measure performance using precision, recall, and weighted F1.
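The two balancing strategies of §3.1 can be sketched with illustrative helpers (not the authors' code): inverse-frequency weights for cost-sensitive learning, and up-/down-sampling every class to a fixed size for re-sampling.

```python
import random
from collections import Counter

def inverse_frequency_weights(labels):
    """Cost-sensitive re-weighting: weight each class by 1 / frequency."""
    counts = Counter(labels)
    return {cls: 1.0 / n for cls, n in counts.items()}

def resample(examples, labels, per_class, seed=0):
    """Re-sampling: up-sample minority and down-sample majority classes
    so that every class ends up with exactly `per_class` examples."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(examples, labels):
        by_class.setdefault(y, []).append(x)
    balanced = []
    for y in sorted(by_class):
        xs = by_class[y]
        if len(xs) >= per_class:   # majority class: sample without replacement
            picks = rng.sample(xs, per_class)
        else:                      # minority class: sample with replacement
            picks = [rng.choice(xs) for _ in range(per_class)]
        balanced.extend((x, y) for x in picks)
    return balanced
```

With 10 categories and `per_class=9442`, re-sampling would yield the 94,420-tweet balanced training set of §3.1; the inverse-frequency weights would instead be passed to the loss function.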
All models are trained and evaluated on the imbalanced data and with the two data balancing methods (see §3.1). For the FL setting, we conduct experiments manipulating the independent and identically distributed (I.I.D.) data assumption on client nodes.

# 4.1 Baseline models

We use 6 centralised models as baselines for comparison with the federated approach. Specifically, we use a bi-LSTM (Hochreiter and Schmidhuber, 1997) with 2 hidden layers and dropout at 0.5; two multi-lingual models: mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020); and three models pre-trained on Indic languages: IndicBert (Kakwani et al., 2020), HindiBERT, and Hindi-Electra.

# 4.2 Federated models

For our federated learning experiments, we use the FedProx (Li et al., 2018) algorithm and a modification of the CausalFedGSD (Francis et al., 2021) algorithm. FedProx trains models by considering the dissimilarity between local gradients and adds a proximal term to the loss function to prevent divergence from non-I.I.D. data. CausalFedGSD reserves $30\%$ of the training data for initializing client nodes. When a client node is created, it receives a random sample of the reserved data for initialization. In our modification, we similarly reserve $30\%$ of the training data; however, we diverge by using the full sample to initialize the global model, which is then transmitted to client nodes. This change means that (i) user data is transmitted fewer times; (ii) modellers retain control over the initialization of the model, e.g. to deal with class imbalances; and (iii) models converge faster, due to exposing client nodes to the distribution of all classes (see Appendix A.6).

We reuse the bi-LSTM (see §4.1) as our experimental model on client devices due to its relatively low compute requirements. For our experiments, we set the number of clients to 100 and simulate I.I.D. and non-I.I.D. settings. We simulate an I.I.D.
setting by ensuring that all client devices receive data that is representative of the entire dataset. For the non-I.I.D. setting, we create severely imbalanced data splits for clients by first grouping the data by label, then splitting the grouped data into 200 bins and randomly assigning
| Model | Imbalanced P | Imbalanced R | Imbalanced F1 | Re-sampled P | Re-sampled R | Re-sampled F1 | Cost-Sensitive P | Cost-Sensitive R | Cost-Sensitive F1 |
|---|---|---|---|---|---|---|---|---|---|
| Bi-LSTM | 64.72 | 64.26 | 63.83 | 64.42 | 55.41 | 58.61 | 68.41 | 62.27 | 64.46 |
| mBERT | 63.25 | 66.90 | 64.50 | 62.18 | 53.43 | 56.58 | 63.99 | 62.73 | 63.30 |
| XLM-R | 68.74 | 70.39 | 69.44 | 67.92 | 60.76 | 63.39 | 69.79 | 68.33 | 68.87 |
| IndicBERT | 67.15 | 68.22 | 67.60 | 68.04 | 62.44 | 64.58 | 69.54 | 67.98 | 68.66 |
| HindiBERT | 65.39 | 66.53 | 65.90 | 62.95 | 55.16 | 57.92 | 66.97 | 65.32 | 66.06 |
| Hindi-Electra | 27.34 | 52.29 | 35.91 | 64.42 | 57.93 | 60.30 | 27.34 | 52.29 | 35.91 |

Table 1: Centralised model performances.
| c | Split | Imbalanced P | Imbalanced R | Imbalanced F1 | Re-sampled P | Re-sampled R | Re-sampled F1 | Cost-Sensitive P | Cost-Sensitive R | Cost-Sensitive F1 |
|---|---|---|---|---|---|---|---|---|---|---|
| 10% | I.I.D. | 60.94 | 66.11 | 62.99 | 60.89 | 46.01 | 50.89 | 60.45 | 59.50 | 59.50 |
| 10% | non-I.I.D. | 61.05 | 39.68 | 25.82 | 60.83 | 22.52 | 23.04 | 60.47 | 41.24 | 28.92 |
| 30% | I.I.D. | 61.10 | 66.01 | 63.04 | 60.58 | 46.58 | 51.22 | 60.91 | 60.99 | 60.47 |
| 30% | non-I.I.D. | 57.88 | 66.39 | 61.64 | 57.38 | 35.85 | 37.37 | 55.76 | 56.77 | 52.69 |
| 50% | I.I.D. | 61.11 | 66.91 | 63.35 | 60.78 | 47.14 | 51.63 | 61.56 | 60.39 | 60.48 |
| 50% | non-I.I.D. | 56.94 | 63.82 | 57.06 | 53.16 | 36.83 | 41.95 | 56.51 | 63.41 | 56.80 |

Table 2: Results using the FedProx algorithm. c is the percentage of clients whose updates are considered.
| c | Split | Imbalanced P | Imbalanced R | Imbalanced F1 | Re-sampled P | Re-sampled R | Re-sampled F1 | Cost-Sensitive P | Cost-Sensitive R | Cost-Sensitive F1 |
|---|---|---|---|---|---|---|---|---|---|---|
| 10% | I.I.D. | 60.89 | 65.04 | 62.34 | 62.14 | 44.95 | 50.27 | 61.62 | 61.15 | 60.92 |
| 10% | non-I.I.D. | 58.09 | 60.84 | 56.96 | 53.31 | 34.01 | 39.69 | 56.72 | 65.20 | 60.51 |
| 30% | I.I.D. | 61.40 | 66.08 | 63.22 | 62.46 | 45.39 | 50.67 | 62.17 | 61.97 | 61.60 |
| 30% | non-I.I.D. | 57.68 | 66.51 | 61.21 | 58.21 | 27.61 | 27.61 | 56.86 | 65.54 | 60.59 |
| 50% | I.I.D. | 60.69 | 66.33 | 62.90 | 61.86 | 46.00 | 51.15 | 60.50 | 61.28 | 60.44 |
| 50% | non-I.I.D. | 57.90 | 57.84 | 47.85 | 59.84 | 22.57 | 28.83 | 59.33 | 60.56 | 54.22 |

Table 3: Results using the modified CausalFedGSD. c is the percentage of clients whose updates are considered.
| Setting | c = 10% P | c = 10% R | c = 10% F1 | c = 30% P | c = 30% R | c = 30% F1 | c = 50% P | c = 50% R | c = 50% F1 |
|---|---|---|---|---|---|---|---|---|---|
| Imbalanced | 63.95 | 64.19 | 63.67 | 64.23 | 64.44 | 63.91 | 64.16 | 64.28 | 63.78 |
| Re-sampled | 63.07 | 51.14 | 55.08 | 62.84 | 52.04 | 55.71 | 62.84 | 51.72 | 55.50 |
| Cost-Sensitive | 66.72 | 64.96 | 65.38 | 66.66 | 64.84 | 65.27 | 66.78 | 65.08 | 65.47 |

Table 4: Results for the baseline CausalFedGSD. $c$ is the client fraction per round.
| Approach | Centralised: XLM-R | Federated: FedProx | Federated: Modified CausalFedGSD |
|---|---|---|---|
| Imbalanced | 69.44 | 63.35 | 63.22 |
| Re-sampled | 63.39 | 51.63 | 51.15 |
| Cost-Sensitive | 68.87 | 60.48 | 61.60 |
Table 5: An approach-wise comparison of F1 scores for the best performing models in centralised and federated settings.

2 bins to each client. We experiment with three different settings, in which we randomly select $10\%$, $30\%$, and $50\%$ of all clients whose updates are incorporated into the global model.

# 4.3 Analysis

Considering the results for our baseline models (see Table 1), we find that XLM-R and IndicBERT obtain the best performances. Further, cost-sensitive weighting tends to out-perform re-sampling the dataset. In fact, cost-sensitive weighting performs comparably to, or out-performs, the other settings. Curiously, we see that Hindi-Electra under-performs compared to all other models, including HindiBERT, which is a smaller model trained on the same data. This discrepancy in the models' performances may be due to differences in complexity, and thus in the data required to achieve competitive performance. Finally, the bi-LSTM slightly under-performs in comparison to XLM-R; however, it performs competitively with all other well-performing models.

Turning to the performance of the federated baselines (see Table 2), we find the expected performance of the models.$^{6}$ Generally, we find that the federated models achieve comparable performances that are slightly lower than those of the centralised systems. This is due to the privacy-performance trade-off, where a small performance loss is the cost of increased privacy. Considering the F1 scores, we find that the optimal ratio of clients depends on whether the data is I.I.D. In contrast, models trained on the re-sampled data tend to prefer an I.I.D. setting, but in general under-perform in comparison with the other weighting strategies, including the imbalanced sample. Using our modification of the CausalFedGSD algorithm, we show improvements over our FL baselines when the data is I.I.D. and variable performance in the non-I.I.D. setting (see Table 3).
Comparing the results of the best performing settings, we find that the FL architectures perform comparably with the centralised models, in spite of being exposed to less data and preserving the privacy of users (see Table 5). Table 4 refers to the results for the I.I.D. experiments of the baseline CausalFedGSD algorithm (Francis et al., 2021). We also observe a difference in optimisation time for the two models (see Appendix A.6). Models trained using our modification of CausalFedGSD converge faster than the original CausalFedGSD, which in turn converges much faster than FedProx. Moreover, we find indications that the original CausalFedGSD algorithm may be prone to over-fitting, as its performance stagnates without fluctuations, while our modification shows fluctuations in performance similar to those of the FedProx models.

# 5 Conclusion

Emoji topic prediction in user-generated text is a task which can involve highly private data. It is therefore important to consider privacy-preserving methods for the task. Here, we presented a new dataset for the task in Hindi and compared a privacy-preserving approach, Federated Learning, with centralised, server-trained methods. We present a modification to the CausalFedGSD algorithm, and find that it converges faster than our other experimental models. In our experiments with different data balancing methods and simulations of I.I.D. and non-I.I.D. settings, we find that using FL affords comparable performances to the more complex fine-tuned language models that are trained centrally, while ensuring privacy. In future work, we plan to extend this work to multi-label emoji topic prediction and to investigate strategies for dealing with decay of the model vocabulary.

# Ethical considerations

The primary reason for using federated learning is to ensure user privacy. The approach can, however, stand in conflict with open and reproducible science in terms of data sharing.
We address this issue by making our dataset open to the public, given that researchers provide Institutional Review Board (IRB) approval and a research statement that details the methods and goals of the research. For researchers who are at institutions without IRB processes, data will only be released given a research statement that also details potential harms to participants. The sharing of data will follow Twitter's developer agreement, which allows for 50k Tweet objects to be shared. We will further provide the code to expand our 24k unique tweets into the full dataset of 118k.

Our modification of the CausalFedGSD model introduces the concern of some data being used to initialise the model. Here, a concern can be that some data will be available globally. While this concern is justified, the use of FL affords two things: First, FL can limit the overall amount of raw data that is transmitted and risks exposure. Second, initialisation can occur using synthetic data, created for the express purpose of model initialisation. Moreover, pre-existing public, or privately owned, datasets can be used to initialise models, which can then be further trained given the weight updates provided by the client nodes. Federated learning, and our approach to FL, thus reduce the risks of exposing sensitive information about users, although the method does not completely remove such risks.

# References

Hamza Alshenqeeti. 2016. Are emojis creating a new or old visual language for new generations? A socio-semiotic study. Advances in Language and Literary Studies, 7(6).

Francesco Barbieri, Jose Camacho-Collados, Francesco Ronzano, Luis Espinosa-Anke, Miguel Ballesteros, Valerio Basile, Viviana Patti, and Horacio Saggion. 2018a. SemEval 2018 task 2: Multilingual emoji prediction. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 24-33, New Orleans, Louisiana. Association for Computational Linguistics.
Francesco Barbieri, Luis Espinosa-Anke, Jose Camacho-Collados, Steven Schockaert, and Horacio Saggion. 2018b. Interpretable emoji prediction via label-wise attention LSTMs. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4766-4771, Brussels, Belgium. Association for Computational Linguistics.

Lukas Biewald. 2020. Experiment tracking with weights and biases. Software available from wandb.com.

Mateusz Buda, Atsuto Maki, and Maciej A Mazurowski. 2018. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106:249-259.

Spencer Cappallo, Stacey Svetlichnaya, Pierre Garrigues, Thomas Mensink, and Cees GM Snoek. 2018. New modality: Emoji challenges in prediction, anticipation, and retrieval. IEEE Transactions on Multimedia, 21(2):402-415.

Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. 2002. SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321-357.

Nurendra Choudhary, Rajat Singh, Vijjini Anvesh Rao, and Manish Shrivastava. 2018a. Twitter corpus of resource-scarce languages for sentiment analysis and multilingual emoji prediction. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1570-1577, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Nurendra Choudhary, Rajat Singh, Ishita Bindlish, and Manish Shrivastava. 2018b. Contrastive learning of emoji-based representations for resource-poor languages. arXiv preprint arXiv:1804.01855.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Chris Drummond. 2003. Class imbalance and cost sensitivity: Why undersampling beats oversampling. In ICML-KDD 2003 Workshop: Learning from Imbalanced Datasets, volume 3.

Luis Duarte, Luís Macedo, and Hugo Gonçalo Oliveira. 2020. Emoji prediction for Portuguese. In International Conference on Computational Processing of the Portuguese Language, pages 174-183. Springer.

Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1615-1625, Copenhagen, Denmark. Association for Computational Linguistics.

Sreya Francis, Irene Tenison, and Irina Rish. 2021. Towards causal federated learning for enhanced robustness and privacy. arXiv preprint arXiv:2104.06557.

Yonatan Geifman and Ran El-Yaniv. 2017. Deep active learning over the long tail. arXiv preprint arXiv:1711.00941.

Haibo He and Edwardo A Garcia. 2009. Learning from imbalanced data. IEEE Transactions on Knowledge and Data Engineering, 21(9):1263-1284.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Chen Huang, Yining Li, Chen Change Loy, and Xiaoou Tang. 2016. Learning deep representation for imbalanced classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5375-5384.
Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual corpora, evaluation benchmarks and pre-trained multilingual language models for Indian languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4948-4961, Online. Association for Computational Linguistics.

Salman H Khan, Munawar Hayat, Mohammed Bennamoun, Ferdous A Sohel, and Roberto Togneri. 2017. Cost-sensitive learning of deep feature representations from imbalanced data. IEEE Transactions on Neural Networks and Learning Systems, 29(8):3573-3587.

Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. 2018. Federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127.

Chaya Liebeskind and Shmuel Liebeskind. 2019. Emoji prediction for Hebrew political domain. In Companion Proceedings of The 2019 World Wide Web Conference, WWW '19, page 468-477, New York, NY, USA. Association for Computing Machinery.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 2980-2988.

Weicheng Ma, Ruibo Liu, Lili Wang, and Soroush Vosoughi. 2020. Emoji prediction: Extensions and benchmarking. arXiv preprint arXiv:2007.07389.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273-1282. PMLR.

Rebeca Padilla López and Fabienne Cap. 2017. Did you ever read about frogs drinking coffee? Investigating the compositionality of multi-emoji expressions. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 113-117, Copenhagen, Denmark.
Association for Computational Linguistics.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, and et al. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library, page 8024-8035. Curran Associates, Inc.

Swaroop Ramaswamy, Rajiv Mathews, Kanishka Rao, and Françoise Beaufays. 2019. Federated learning for emoji prediction in a mobile keyboard. arXiv preprint arXiv:1906.04329.

Francesco Ronzano, Francesco Barbieri, Endang Wahyu Pamungkas, Viviana Patti, Francesca Chiusaroli, et al. 2018. Overview of the EVALITA 2018 Italian emoji prediction (ITAmoji) task. In 6th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop, EVALITA 2018, volume 2263, pages 1-9. CEUR-WS.

Nikolaos Sarafianos, Xiang Xu, and Ioannis A Kakadiaris. 2018. Deep imbalanced attribute classification using visual attention aggregation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 680-697.

Li Shen, Zhouchen Lin, and Qingming Huang. 2016. Relay backpropagation for effective learning of deep convolutional neural networks. In European Conference on Computer Vision, pages 467-482. Springer.

Luke Stark and Kate Crawford. 2015. The conservatism of emoji: Work, affect, and communication. Social Media + Society, 1(2):2056305115604853.

Kai Ming Ting. 2000. A comparative study of cost-sensitive boosting algorithms. In Proceedings of the 17th International Conference on Machine Learning. CiteSeer.

Toshiki Tomihira, Atsushi Otsuka, Akihiro Yamashita, and Tetsuji Satoh. 2020. Multilingual emoji prediction using BERT for sentiment analysis. International Journal of Web Information Systems.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Timothy Yang, Galen Andrew, Hubert Eichner, Haicheng Sun, Wei Li, Nicholas Kong, Daniel Ramage, and Françoise Beaufays. 2018. Applied federated learning: Improving Google keyboard query suggestions. arXiv preprint arXiv:1812.02903.

Linrui Zhang, Yisheng Zhou, Tatiana Erekhinskaya, and Dan Moldovan. 2020. Emoji prediction: A transfer learning approach. In Future of Information and Communication Conference, pages 864-872. Springer.

Zhi-Hua Zhou and Xu-Ying Liu. 2005. Training cost-sensitive neural networks with methods addressing the class imbalance problem. IEEE Transactions on Knowledge and Data Engineering, 18(1):63-77.

Yang Zou, Zhiding Yu, BVK Kumar, and Jinsong Wang. 2018. Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European Conference on Computer Vision (ECCV), pages 289-305.

# A Appendix

# A.1 Data

The tweets were collected using "Elevated access" to the Twitter API v2. To collect tweets written in the Hindi language, we use the "lang:hi" query. No other search criteria are used. The time-span of the tweets is from 19th April 2021 to 8th May 2021. Figure 4 shows a sample of tweets present in our Hindi dataset for the task of emoji prediction.
![](images/4e698e2a80c10358b66b1f7cd5030cb796de8a454d72726c4b6e028a1058e097.jpg)
Figure 3: Category distribution of the complete dataset.

# A.2 Server-Based Models

For traditional server-side transformer models, we use the Simple Transformers library. We use the default configuration options and train all the transformer models for 25 epochs with a learning rate of 4e-5 and no weight decay or momentum.

All baseline models (transformer-based and others) are trained with batch size 8, learning rate 4e-5, and sequence length 128.

All models were trained using deterministic algorithms for randomness in PyTorch and are easily reproducible using the same seeds.

# A.3 Experiments considering only text

We run some additional experiments considering only the actual text, without applying the token setup described in §3; the results are reflected in Tables 6, 7, 8, and 9. Observing these results, we note that, except for the non-I.I.D. settings, we see negligible improvements from applying the extra tokens in the original dataset (see §3).
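For reference, the weighted F1 reported in §4 and in the appendix tables is the per-class F1 averaged with weights equal to each class's support; a dependency-free sketch (equivalent in spirit to scikit-learn's `f1_score` with `average='weighted'`):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights equal to class support in y_true."""
    support = Counter(y_true)
    total = 0.0
    for cls in set(y_true) | set(y_pred):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        total += support[cls] * f1
    return total / len(y_true)
```

Weighting by support means the metric reflects performance on the frequent categories more than on the rare ones, which matters for a long-tailed label distribution like this dataset's.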
LangTextEmojiLabel
Hindiबिलाककूल सही कहा आयपों भातिPeople & Body
EnglishYou are absolutely right brother
Hindiधादर्षेशकूल सिहर्वप्रोम् अध्या तहारवालो अोर काँलोनी गाले नही जनाहिते मे जांरीSmileys & Emotion
EnglishRemember that only love is blind and not your family and the colony. Spreading the word in public interest
Hindiधु़ो किमलना धिोत्ता तिको नहाObjects
EnglishHow much we love you
Hindiधु़ो किमलना धिोत्ता तिको नहाAnimals & Nature
EnglishSome strange fragrance in the whispering winds of very emotional sweet words
Hindiधु़ो किमलना की आयते तिको नहाFood & Drink
EnglishBest wishes for your birthday
+ +Figure 4: Example of our Hindi dataset + +
| Approach | c | Setting | Precision | Recall | F1 |
| --- | --- | --- | --- | --- | --- |
| Imbalanced | 10% | IID | 61.33 | 64.66 | 62.32 |
| Imbalanced | 10% | non-IID | 57.70 | 64.10 | 57.96 |
| Imbalanced | 30% | IID | 61.55 | 67.64 | 63.60 |
| Imbalanced | 30% | non-IID | 58.01 | 58.42 | 54.86 |
| Imbalanced | 50% | IID | 61.65 | 66.83 | 63.57 |
| Imbalanced | 50% | non-IID | 58.30 | 61.59 | 58.09 |
| Re-sampled | 10% | IID | 61.49 | 46.22 | 51.12 |
| Re-sampled | 10% | non-IID | 56.84 | 30.06 | 34.28 |
| Re-sampled | 30% | IID | 60.60 | 43.75 | 49.19 |
| Re-sampled | 30% | non-IID | 57.48 | 35.32 | 41.36 |
| Re-sampled | 50% | IID | 60.85 | 47.71 | 52.14 |
| Re-sampled | 50% | non-IID | 56.13 | 41.28 | 45.76 |
| Cost-Sensitive | 10% | IID | 62.14 | 63.35 | 61.99 |
| Cost-Sensitive | 10% | non-IID | 58.08 | 65.86 | 61.25 |
| Cost-Sensitive | 30% | IID | 63.72 | 65.25 | 63.78 |
| Cost-Sensitive | 30% | non-IID | 56.39 | 57.76 | 54.36 |
| Cost-Sensitive | 50% | IID | 60.36 | 59.99 | 59.57 |
| Cost-Sensitive | 50% | non-IID | 56.68 | 63.22 | 59.36 |

Table 6: Results using the FedProx algorithm on the dataset as explained in A.3.
| Approach | c | Setting | Precision | Recall | F1 |
| --- | --- | --- | --- | --- | --- |
| Imbalanced | 10% | IID | 61.83 | 67.24 | 63.87 |
| Imbalanced | 10% | non-IID | 58.96 | 45.88 | 38.34 |
| Imbalanced | 30% | IID | 61.62 | 67.11 | 63.41 |
| Imbalanced | 30% | non-IID | 58.95 | 63.80 | 60.58 |
| Imbalanced | 50% | IID | 61.66 | 67.38 | 63.70 |
| Imbalanced | 50% | non-IID | 59.46 | 49.39 | 43.88 |
| Re-sampled | 10% | IID | 59.44 | 37.53 | 43.68 |
| Re-sampled | 10% | non-IID | 53.10 | 49.91 | 41.50 |
| Re-sampled | 30% | IID | 59.53 | 41.06 | 46.54 |
| Re-sampled | 30% | non-IID | 58.61 | 26.68 | 32.45 |
| Re-sampled | 50% | IID | 60.97 | 39.02 | 45.48 |
| Re-sampled | 50% | non-IID | 57.70 | 32.98 | 39.71 |
| Cost-Sensitive | 10% | IID | 60.88 | 59.38 | 59.49 |
| Cost-Sensitive | 10% | non-IID | 54.82 | 57.42 | 46.17 |
| Cost-Sensitive | 30% | IID | 60.45 | 60.71 | 59.96 |
| Cost-Sensitive | 30% | non-IID | 59.05 | 66.52 | 62.09 |
| Cost-Sensitive | 50% | IID | 60.44 | 61.41 | 60.38 |
| Cost-Sensitive | 50% | non-IID | 58.69 | 63.60 | 60.11 |

Table 7: Results using the modified CausalFedGSD on the dataset as explained in A.3.
| Approach | c | Precision | Recall | F1 |
| --- | --- | --- | --- | --- |
| Imbalanced | 10% | 62.93 | 63.37 | 62.68 |
| Imbalanced | 30% | 63.01 | 63.48 | 62.73 |
| Imbalanced | 50% | 63.21 | 63.60 | 62.90 |
| Re-sampled | 10% | 60.84 | 49.59 | 53.30 |
| Re-sampled | 30% | 60.72 | 49.45 | 53.11 |
| Re-sampled | 50% | 60.47 | 49.08 | 52.76 |
| Cost-Sensitive | 10% | 64.56 | 63.70 | 63.65 |
| Cost-Sensitive | 30% | 64.61 | 63.88 | 63.70 |
| Cost-Sensitive | 50% | 64.27 | 63.41 | 63.33 |

Table 8: Results for the baseline CausalFedGSD on the dataset as explained in A.3.
| Approach | Centralised: XLM-R | Federated: FedProx | Federated: Modified CausalFedGSD |
| --- | --- | --- | --- |
| Imbalanced | 69.44 | 63.60 | 63.87 |
| Re-sampled | 63.39 | 52.14 | 46.54 |
| Cost-Sensitive | 68.87 | 63.78 | 62.09 |
Table 9: An approach-wise comparison of F1 scores for the best-performing models in centralized and federated settings, trained on the dataset as explained in A.3.

# A.4 Federated Learning Plots

This section provides detailed graphs comparing the training loss, validation AUC, validation F1 score, and validation accuracy for every dataset variation. All of these graphs were made using Weights and Biases (Biewald, 2020).

We set the value of the proximal term to 0.01 following Li et al. (2018). We set the learning rate to 1e-2 for federated models based on hyperparameter sweeps.

![](images/a179add21a3e29da03cd874e9b3a0b974c1e95b342bac2e78e26f583d9059063.jpg)

# A.4.1 Imbalanced Dataset (IID)

![](images/809e77780dbe401d9ee56dfc15641bf1b46ac9984a36a9b932994b0826028304.jpg)

![](images/60254965a02836573d04198c2feff595788c7b1117436a0b35d4dc07c659601c.jpg)

![](images/f95bee2db751cb37eba65b417874e67c8517c19e205d9fb76b09efdd4a4a18ae.jpg)

# A.4.2 Imbalanced Dataset (non-IID)

![](images/5c5c3178686ac87a98f6adf35e663debb5ffc1822bc5fbd09541a9831095adee.jpg)

![](images/6cc4277ae476e70f3f106f4916643858fd66a17a2aa60c201d6c225943b8417f.jpg)

![](images/94d97287be46a3d9197b511e5b2f7936a83eec468276b9e43446b248150e06a4.jpg)

![](images/e7a1ee1ac9ab39bd59d9eb6d95804a242506e7c420496b889387e3a0563ea1f9.jpg)

![](images/de32f2e4ca6809ccb3266cddf9269f6c3d196cee4ac3c48ee533d41be1cc9d85.jpg)

# A.4.3 Balanced Dataset (IID)

![](images/94830b791e4737b567e9fa9c3b5b941686a7fb4bed9f5fde4f078704957d008a.jpg)

![](images/a5f7282ac17f8a388e9ed300785127678acff8f337f53950a84790491a766191.jpg)

# A.4.4 Balanced Dataset (non-IID)

![](images/8b91cc397e4c05482da007354a243bd9ace36ad870ac459552b455a1972c3832.jpg)

![](images/65187850886bdabc9b5022c27b29169c64d4d95c49c4292902ef8ff75fff8889.jpg)

![](images/f49082f6f2d672e04b78d20dd99bab11af53c4d28d8d6202bfe61a73e39d63a1.jpg)

![](images/2b8ba2bb6eb7e4c786fac226a9d609108e48bf1785ba3221e1533918f3ea2afd.jpg)
![](images/a4c520153ece0c816f2ff538704b477cf66f60b95d2084ea2a98ff5be274845f.jpg)

# A.4.5 Cost Sensitive Approach (IID)

![](images/ec136bf18ca8bde786bff29c073e47f11f54841104c3acd9e0a008789b24d193.jpg)

![](images/039f58a902ee4bfd82870fb7379b9b071961ec900353eacf9f823fba85dc2ff1.jpg)

![](images/14fbe0ce81fd22d2fc06e87f63170bb93ed44ad541908c423624a10a64c6f881.jpg)

![](images/e593d5868f0207ea28feff9d9a3d80a0850be76aab635dd8c90e1e4de620a0b7.jpg)

![](images/a73d7ae3cfafa2ae2bed5c3c7d4f405e89400b31bc4f367caa136fc5494f31a7.jpg)

# A.4.6 Cost Sensitive Approach (non-IID)

![](images/bd7b02baabd0c4c4b48f5f0151806fc69cd65ad3600577b3d3fb6ea973232672.jpg)

![](images/8bcee3f8fdad74423b8773e879a9e00070c43b0167f319c1ea1a8f71e6722931.jpg)

# A.5 Time vs GPU Usage

This section provides detailed graphs of GPU usage in Watts for every experiment variation.

# A.5.1 Imbalanced Dataset

![](images/041602cfacabe39d005d4caf72bab97b5e1a44180bf801648a822a0c8ef75499.jpg)

# A.5.2 Balanced Dataset

![](images/ffb297866cc07837bcd94001496eae0766ab768ddea2731e6707c10e3fafd497.jpg)

![](images/4f4f298daddd975a768e7e59388677e9f0c469004d6e493d373b0ee4ac8d4eee.jpg)

# A.5.3 Cost Sensitive Approach

![](images/515da8ea32fc13308fb611284226e5bf04feefd4761fb6410dbe5908d14cd2de.jpg)

# A.6 Performance Analysis of CausalFedGSD vs Modified CausalFedGSD

We observe that, when the original CausalFedGSD and our modification are run on the same hardware with the same number of parameters, the modified version finishes training in just 3.75 hours, as opposed to the original CausalFedGSD implementation, which takes around 47 hours. Figure 5 compares runs of the heaviest variant of each algorithm; the highest runtime recorded for both was on the class-weight dataset variant with $c = 0.5$. Similar performance is recorded for all other variants on all datasets.
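The FedProx setup described in A.4 penalizes each client's local loss by its squared distance from the global weights, scaled by the proximal coefficient (0.01 here). A minimal plain-Python sketch of that local objective follows; the tensor-level implementation inside the actual training loop is not shown in the paper, so this is illustrative only.

```python
# FedProx local objective: task_loss + (mu / 2) * ||w_local - w_global||^2,
# with mu = 0.01 as used in our experiments (following Li et al., 2018).
MU = 0.01

def fedprox_loss(task_loss, local_w, global_w, mu=MU):
    """Add the proximal penalty over flat lists of weights (illustrative)."""
    prox = sum((w - g) ** 2 for w, g in zip(local_w, global_w))
    return task_loss + 0.5 * mu * prox
```

The penalty vanishes when a client's weights equal the global model, and grows as local updates drift, which is what stabilizes training under non-IID client data.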
+ +![](images/c8faf194d8a8206209fb71a832dde172c257e757ae133a2cf8e41e980a3c2dfb.jpg) +Figure 5: Comparison between optimization times for the baseline CausalFedGSD vs Modified CausalFedGSD \ No newline at end of file diff --git a/afederatedapproachtopredictingemojisinhinditweets/images.zip b/afederatedapproachtopredictingemojisinhinditweets/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..298d4bc5193853b1490a828b8fe2af9acc6a83f7 --- /dev/null +++ b/afederatedapproachtopredictingemojisinhinditweets/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f20f5527ad40f18f16c19050de6bb8d658ef84187250736802aa658b831e48a +size 615385 diff --git a/afederatedapproachtopredictingemojisinhinditweets/layout.json b/afederatedapproachtopredictingemojisinhinditweets/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b46411c07e716d27607051b797309a9be4c31ad3 --- /dev/null +++ b/afederatedapproachtopredictingemojisinhinditweets/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eaea6c3b785f83d59613a0df483b44ce4dcc51cb04d1c3c198e4d677d88d9873 +size 318343 diff --git a/affectiveidiosyncraticresponsestomusic/2d9a67f2-926b-4cc4-812d-21461e6e94df_content_list.json b/affectiveidiosyncraticresponsestomusic/2d9a67f2-926b-4cc4-812d-21461e6e94df_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..885cc9de9f9de232743e3fa74882e45515b0b42f --- /dev/null +++ b/affectiveidiosyncraticresponsestomusic/2d9a67f2-926b-4cc4-812d-21461e6e94df_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bfdbce934fc627a6efd85f01f6e613fe32b910d3ba68965cb7b83c2abee618f +size 184052 diff --git a/affectiveidiosyncraticresponsestomusic/2d9a67f2-926b-4cc4-812d-21461e6e94df_model.json b/affectiveidiosyncraticresponsestomusic/2d9a67f2-926b-4cc4-812d-21461e6e94df_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..65c0da6c0e1cbcdc5980ed6abccb5b57b6311b26 --- /dev/null +++ b/affectiveidiosyncraticresponsestomusic/2d9a67f2-926b-4cc4-812d-21461e6e94df_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e178db0c88c9dae9aae035ebebcd00934acaf71aaf9107def38ace27915642c9 +size 215225 diff --git a/affectiveidiosyncraticresponsestomusic/2d9a67f2-926b-4cc4-812d-21461e6e94df_origin.pdf b/affectiveidiosyncraticresponsestomusic/2d9a67f2-926b-4cc4-812d-21461e6e94df_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..97306fe0435d73517d878cc4562c4a602c3fbb77 --- /dev/null +++ b/affectiveidiosyncraticresponsestomusic/2d9a67f2-926b-4cc4-812d-21461e6e94df_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0dc9c6426565ebb9b06e0b6c4aa27aef0bae0f0f673517f4a5f6f70dbf46dbc7 +size 3655683 diff --git a/affectiveidiosyncraticresponsestomusic/full.md b/affectiveidiosyncraticresponsestomusic/full.md new file mode 100644 index 0000000000000000000000000000000000000000..231b3a1636f6ac0957e5f03e9fd65af497c6616b --- /dev/null +++ b/affectiveidiosyncraticresponsestomusic/full.md @@ -0,0 +1,811 @@ +# Affective Idiosyncratic Responses to Music + +Sky CH-Wang $^{\circ}$ Evan Li $^{\circ}$ Oliver Li $^{\circ}$ Smaranda Muresan $^{\bullet\bullet}$ Zhou Yu $^{\circ}$ + +$^{\circ}$ Department of Computer Science, Columbia University + +$^{\text{a}}$ Data Science Institute, Columbia University + +skywang@cs.columbia.edu + +{el3078, al4143, smara, zy2461}@columbia.edu + +# Abstract + +Affective responses to music are highly personal. Despite consensus that idiosyncratic factors play a key role in regulating how listeners emotionally respond to music, precisely measuring the marginal effects of these variables has proved challenging. 
To address this gap, we develop computational methods to measure affective responses to music from over 403M listener comments on a Chinese social music platform. Building on studies from music psychology in systematic and quasi-causal analyses, we test for musical, lyrical, contextual, demographic, and mental health effects that drive listener affective responses. Finally, motivated by the social phenomenon known as 网抑云 (wang-yi-yún), we identify influencing factors of platform user self-disclosures, the social support they receive, and notable differences in discloser user activity.

# 1 Introduction

Music can evoke powerful emotions in listeners (Meyer, 1956). However, our emotional reactions to it are not universal—affective responses to music are highly personal. Just as you may wonder why your friend is sobbing to a song that you only feel ambivalent about, a listener's emotional response to music not only varies with inherent audio or lyrical features (Hevner, 1935; Webster and Weir, 2005; Van der Zwaag et al., 2011), but also with other factors such as a listener's demographics, mental health conditions, and surrounding environment (Krugman, 1943; Robazza et al., 1994; Gregory and Varney, 1996; Juslin and Västfjäll, 2008; Saarikallio et al., 2013; Garrido et al., 2018). As a result of this idiosyncrasy, it has been extremely difficult to precisely measure the marginal effects of these variables on a listener's affective response (Yang et al., 2007; Beveridge and Knox, 2018). This difficulty is further compounded when examining how a collection of these factors influences individual affective reactions in combination (Gómez-Cañón et al., 2021).

Music psychology has long focused on identifying the relationships between human affect and music, both in those that are perceived and those that are felt. Perceiving and feeling emotions in music, while highly related, are not identical (Hunter and Schellenberg, 2010).
Examining the latter has proved challenging, as in addition to insufficient scale for finding significance, measuring felt emotions in participatory studies often interferes with the experience itself (Gabrielsson and Lindström, 2010). While recent computational studies have attempted citizen science approaches for annotation (Gutiérrez Páez et al., 2021), reliability remains an issue; annotator confusion persists between the concepts of perceived and induced emotions (Juslin, 2019). Our work expands this line of research by examining affective responses to music in a natural setting: an online social music platform.

We test for differences in affective responses to music by computationally measuring expressed emotions from a massive study of over 403M listener comments on one of China's largest social music platforms, Netease Cloud Music. Our paper offers the following three contributions. First, we reveal several nuances in listener affective responses against a host of musical, lyrical, and contextual factors, showing evidence of emotional contagion. Second, in a multi-modal quasi-causal analysis, we show that listeners of different genders and ages vary in their reactions to musical stimuli and identify specific features driving demographic effects on affective responses. Third, motivated by the social phenomenon known as 网抑云, $^{1}$ we systematically study self-disclosures of mental health disorders on the platform, identifying driving factors of this behavior, the social support they receive, and differences in discloser user activity.

# 2 Data

Our work is drawn from one of the largest music streaming services in China, Netease Cloud Music, and focuses on Chinese-language user content.

Netease Cloud Music. 网易云音乐 (wang-yi-yún-yin-yuè) has over 185 million monthly active users (Dredge, 2022).
Unlike mainstream music streaming services in the United States such as Spotify and Apple Music, Netease Cloud Music is a social music platform (Zhou et al., 2018; Wang and Fu, 2020). Here, among other unique features, each song, album, and playlist has a comment section that serves as a discussion board, where users can post top-level comments as well as reply to or up-vote existing ones. These platform interactions serve as a natural setting for observing listener responses, where users are able to post freely² in the comment sections of what they are currently listening to. Users are required to create an account to access most of the platform's features; when doing so, users optionally input personal demographic information like age, gender, and location, which they can then choose to display as public or private.

Dataset Collection. To collect a representative sample of public platform commenting activity, we adapt traditional snowball sampling (Atkinson and Flint, 2001) across multiple random seeds to build an exhaustive list of user, song, album, and playlist entity ids on the platform. We then uniformly sample from the set of entities that have at least one public comment posted. Data was collected from all public content ranging from the platform's inception in 2013 to 2022, totaling over 455K albums, 2.87M songs, 1.36M playlists, 29.9M users, and 403M comments. A detailed breakdown of our data and a view of the interaction interface of the platform are shown in Appendix Section A. The study and data collection were approved by an Institutional Review Board as exempt.

# 3 Measuring Affective Response

We measure affective responses to music as expressed in comments posted under their comment sections. Since not all comments are indicative of a user's emotional response, we sample a subset of user content and examine both the experiencer of the emotion and its expressed stimulus, before conducting our analysis.

我只想和你一个人做那些浪漫到极致的事
Translation: I just want to do the most romantic things with you alone

果然不该来的。混蛋老爸,气死我了!
Translation: I shouldn't have come. Asshole dad, pisses me off!

太棒了好听太治愈了我莫名有点想哭
Translation: It's so good, it's so healing, I feel like crying for some reason

Table 1: Example top-level comments indicating an affective response.

Emotion Experiencer. Two annotators first manually annotated 1000 comments selected uniformly at random to identify the experiencer of the emotion expressed in top-level comments. Top-level comments were chosen to limit dyadic interaction effects and are used to measure affective responses later on. With an initial Cohen's $\kappa$ of 0.80 and with disagreements resolved via discussion, similar to Mohammad et al. (2014), we find that the experiencer of the emotion expressed in the comment is often the commenter themselves (99.1%); we thus maintain this assumption in our later experiments. Selected examples and annotation guidelines are shown in Table 1 and Appendix Section B, respectively.

Affective Stimulus. Next, annotators were tasked with identifying what caused the emotional response in the comment itself. Annotators labeled comments whose expressed emotions could explicitly be said not to originate from the music—under the BRECVEMA framework of music-induced emotions (Juslin, 2013), emotions are evoked in listeners via a combination of mechanisms related to aesthetic appreciation, entrainment, visual imagination, and emotional associations with past experiences, among other factors. A listener's emotional state also has an effect on their music choice; for example, listeners often use music for mood regulation, or as a coping mechanism (Stewart et al., 2019; Schäfer et al., 2020). Here, we make no explicit causal assumptions of music choice and seek only to measure comment affective responses.
With an initial Cohen's $\kappa$ of 0.76 and with disagreements resolved via discussion, we only find a few instances $(3.3\%)$ where affective stimuli may be explicitly attributed elsewhere. There are a few patterns among these irrelevant comments: they primarily relate to album images, quotations, and easily identifiable spam messages, e.g., "沙发" (literally "sofa", meaning "first comment"). Aiming for high precision, we apply simple regular-expression and redundancy filters to increase the relevance of comments with affective content, achieving a precision of $98.8\%$ on a held-out test set of the same size. Specific annotation guidelines and filtering methods are shown in Appendix Sections B and D, respectively.

Measuring Affective Response. We characterize emotions across a 2-dimensional plane of valence and arousal following the Russell model of emotions (Russell, 1980), representing the degree of positivity and emotion intensity, respectively. Specifically, we employ a lexicon-based approach to measure valence and arousal in music comments, using one of the largest crowd-sourced datasets for the Chinese language—Chinese EmoBank (Yu et al., 2016)—containing 5512 words annotated for their valence and arousal. In the following sections, these measures of expressed emotions in comments are what we define as listener affective response.

# 4 Variations in Affective Response

Computing comment-level valence and arousal by averaging across word-level scores, we analyze variations in listener affective responses to (1) musical and (2) lyrical features, (3) contextual factors, and (4) user demographic variables.

# 4.1 Musical and Lyrical Features

While prior work places much emphasis on identifying the causes of perceived emotions in music, less emphasis has been placed on emotional responses, which are highly influenced by extramusical and contextual factors in listeners (Gómez-Cañón et al., 2021).
Recent work has attempted to use physiological signals and self-reported emotions to measure emotional responses in listeners (Hu et al., 2018), though this has proved challenging, partly because a high degree of intercorrelations and confounds makes the number of trials needed to measure such effects intractable relative to typical experiment scale (Eerola et al., 2013). Using our data, we test for the marginal effects of musical and lyrical features on affective responses.

Methods. To understand the marginal contributions of these variables on affective responses, we fit separate multivariable linear regression models on response valence and arousal, including the features described below as regressors. As affective responses are highly idiosyncratic (Juslin and Västfjäll, 2008), we further control for listener demographics, namely age, gender, and location. We then test for multicollinearity by computing the variance inflation factor (VIF) for each variable and iteratively remove collinear variables in our regression that have a VIF greater than 5. In our analyses, we stratify continuous variables (e.g., tempo) into fixed-length category indicator variables (e.g., 80-90 BPM, 90-100 BPM, and so on) and measure the average marginal effects (AME) on valence and arousal of each stratum, using the first of such categories as the reference group (e.g., the AME of 90-100 BPM, and so on, relative to 80-90 BPM).

Musical Features. We use librosa (McFee et al., 2015), pydub (Robert et al., 2018), and timbral_models of the AudioCommons project (Font et al., 2016) to derive song file musical features. We extract (1) tempo and (2) tempo standard deviation, both in beats per minute (BPM) (Ellis, 2007); (3) loudness, measured as the average decibels relative to full scale (dBFS) value of the entire song; (4) mode, namely, major or minor, and (5) key, i.e.
C# minor (Krumhansl, 2001); as well as eight additional timbral features, or the perceived sound qualities of a piece of music. They are, specifically, (6) depth, related to the emphasis of low frequency content, (7) brightness, a measure that correlates both with the spectral centroid and the ratio of high frequencies to the sum of all energy of a sound, and (8) warmth, often created by low and midrange frequencies and associated with lower harmonics (Pearce et al., 2017); (9) roughness, a sound's buzzing, harsh, and raspy quality (Vassilikakis and Fitz, 2007); (10) sharpness, measuring high frequency content (Zwicker and Fastl, 2013); (11) hardness, the amount of aggression (Pearce et al., 2019); (12) reverberation, a sound or echo's persistence after it is initially produced (Jan and Wang, 2012); and (13) boominess, a sound's deep resonant quality as measured by the booming index (Hatano and Hashimoto, 2000). Here, reverberation is classed as a binary variable, while all other + +![](images/bc07dccee8e2c66ffd327786ca537500cca8ffea0156f5f01120a7b4e7bbf74c.jpg) +(a) Tempo + +![](images/0ae13339832bd66eb0cf55562a7233c639b5e548b16aa114226fcfd42703ff04.jpg) +(b) Loudness + +![](images/4c6df4c1e36ab6c29cbc8527f0415741cfbd3a3999662b601c1028b047ea7152.jpg) +(c) Mode + +![](images/66217c9ee95172032d3ff5a1f058b4e98dfffb8657c6c1b81e0bacd296cb1ca5.jpg) +(d) Timbre-Brightness + +![](images/e2e329aba11c10d56d1e0a926f83476eda21b4c5f9782c056d58c1d577da9a80.jpg) +(e) Positive Emotion + +![](images/37e30c87b5e2bb73222d5db990d8c61fa79e47784851e79ec9ab1a9c6592ae46.jpg) +(f) 1st Person Singular Pronouns +Figure 1: Average marginal effects of musical and lyrical features on listener affective responses with other features and listener demographics as controls. Standard errors are shown; valence in red, arousal in blue. The complete set of figures for musical and lyrical features are shown in Appendix Section E, Figures 13-18. 
timbral features are measured and clipped to values between 0 and 100, matching the training-data domain of the regression models behind these timbral measures.

Psycholinguistic Lyrical Features. Similar to Mihalcea and Strapparava (2012), while limiting our analysis to Chinese-language songs, we extract coarse psycholinguistic lexical features of lyrics. Specifically, we preprocess lyrics with regular expressions to remove extraneous information as shown in Appendix Section C and use the Simplified Chinese version (Huang et al., 2012) of the Linguistic Inquiry and Word Count (LIWC) lexicon (Pennebaker et al., 2015) to create normalized counts of tokenized word semantic classes.

Topic-wise Lyrical Features. To capture thematic trends across song lyrics, we train a 20-topic LDA model on preprocessed song lyrics and manually label each topic with its prominent theme, e.g., Nationalism/China or Hometown/Childhood. Labeled lyrical topics and the top words associated with each are shown in Appendix Section C.

Results. Figure 1 reveals five important trends in affective responses across a variety of musical and lyrical features.

First, tempo exhibits a bimodal effect on both valence and arousal; listeners are most intensely positive for tempos of around 110 BPM and 160 BPM, with the former eliciting greater arousal. Higher tempo variation also sees similar increases in affective responses, although tempo standard deviations of around 35-40 BPM produce the opposite effect, with arousal peaking earlier than valence. Our findings are consistent with prior work on listener self-ratings and measured physiological responses that have used coarse categorizations of tempo, e.g., "fast" tempo (Liu et al., 2018), or the presence and absence of tempo variation (Kamenetsky et al., 1997), as opposed to the continuous measures we use here.
Second, consistent with prior work (Schubert, 2004; Gomez and Danuser, 2007), loudness generally shows a strong positive correlation with more intensely positive reactions; changes in loudness also see a greater change in AME than changes in tempo. However, this trend is reversed for the loudest songs (i.e., between -5 and 0 dBFS)—while unexplored in prior work within music psychology, this observation intuitively follows neural downregulation responses to excessively loud or unpleasant sounds (Hirano et al., 2006; Koelsch, 2014).

Third, consistent across all keys (Appendix Figure 14), major mode in songs elicits greater valence and lower arousal than minor mode. This observation extends prior work investigating the interaction effects between mode and affective responses (Van der Zwaag et al., 2011) in a Western tonal context, suggesting that associations of sadness
Greater use of positive emotion terms sees greater response positivity $(r = 0.93, p < 0.05)$ , while the opposite is true for negative emotion terms $(r = -0.92, p < 0.05)$ , and both saw rises in response intensity with their increased use. Furthermore, increases in first-person pronouns also see decreases in valence $(r = -0.94, p < 0.05)$ , mirroring work on the depressed psychological states reflected through their increased use (Pennebaker, 2011). Interpreted together with our findings on musical features, these observations mirror emotional contagion (Juslin, 2013), where the recognition of emotions expressed in music evokes similar emotions in listeners. + +These findings, compared to prior work, highlight the importance of using finer-grained measurements on an extended set of features and controls to provide a more nuanced analysis of emotional responses to musical and lyrical stimuli. Expanded results with the full list of figures are shown in Appendix Figures 13-18. + +# 4.2 Contextual Factors + +Extramusical factors such as listening context (e.g., listening to music when grabbing coffee vs. when exercising) also influence the emotional effects of music (Sloboda and O'Neill, 2001; Greasley and Lamont, 2011; Vuoskoski and Eerola, 2015). Prior work has primarily utilized experience sampling methods (Csikszentmihalyi and LeFevre, 1989) to study musical experiences in everyday contexts— + +![](images/658bcb5c7da63c8779824936b4e172d6210c264b991dd08068186e562526cbfa.jpg) + +![](images/13549530745251b4fa1857b82d131133919cb283ded49c8a0a5baa7f8a8cea98.jpg) + +![](images/cfa281c3cd67fa058e840e4210c4b0cb67e86240deae075245df2704fdffa596.jpg) +Figure 2: Averaged marginal effects of contextual choices in emotion-tagged (top) and setting-tagged (bottom) playlists on listener affective responses; standard errors are shown. 
+ +![](images/da69bfdd3ba3ebf9444840c0bc59cd2ef8db180e409c637a3d758c6c9c08dc9a.jpg) + +where participants are polled at random intervals during the day—though generalizations to the population at large have proved difficult with small sample sizes (Sloboda et al., 2001; Juslin et al., 2008). While we are unable to obtain information about the physical setting a user was in (i.e. that a user was exercising when listening to a song), here, using our data on playlists and treating the choices of users in listening to playlists of specific types as context, we tease out the marginal effects that these choices have on affective responses. + +Choice as Context. We obtain context variables on 1.36M playlists through their tags, used by creators to label individual playlists. Tags consist of a set of physical setting (e.g., afternoon tea), emotional (e.g., nostalgic), and thematic (e.g., video game music) categories, in addition to language (e.g., Chinese) and stylistic (e.g., jazz) labels. As users primarily discover new playlists within the platform by browsing specific tags, we treat these tags as implicit signals of choice with these listening contexts, aiming to identify those that may differ on the emotional responses produced—i.e. that a user chose to listen to an exercise tagged playlist rather than an afternoon tea tagged one—noting that we do not make explicit causal assumptions behind the factors that led to these user choices. + +Methods. 
To identify the marginal effects of contextual choices on affective responses, we fit separate mixed-effect multivariable linear regression models on response valence and arousal, includ + +![](images/e0ef164df88bcdd9a4c855f89782ab30ac60b33567b0a55a99cb68b8e2df6634.jpg) +(a) Tempo, Women/Men + +![](images/46f52e37f100d44b89b0c07067dcff0784adc982a0b7c25006500b9feb351bfb.jpg) +(b) Loudness, Women/Men + +![](images/dd4de7c174b572613e89bd916b7bf17442952a9b63d97ce8116fced5516430f8.jpg) +(c) Mode, Women/Men + +![](images/897cfb5da71c43ea859072248592a7fc083509d3ec469af9134dc1de7435aebf.jpg) +(d) Negative Emotion, Women/Men + +![](images/3d9bad75982acd6e4d64c22ea0a2a2a73790126af1eedc2d5d513195ab696626.jpg) +(e) Hardness, b.i.t. 90s/b.i.t. 00s + +![](images/625902b0ca9271bdf2da0dfa5636af807e74aef973c579f07f80b0d3fb3ca9e6.jpg) +(f) Mode, b.i.t. 90s/b.i.t. 00s +Figure 3: Relative average treatment effects of gender (women/men) and age-based (born in the (b.i.t.) 1990s/b.i.t. 2000s) demographic groups on listener response valence and arousal against musical and lyrical features. Standard errors are shown; valence in red, arousal in blue. The complete set of figures for musical and lyrical features are shown in Appendix Section E, Figures 24-26. + +ing tagged category indicator variables as features and control for listener demographics. To further control for differences between playlist songs, we include them as random effects; for computational tractability, we include only random effects for songs that are labeled with 10 or more unique tags. Results. Cultivating affect is a driving reason behind why users create playlists (DeNora, 2000; Siles et al., 2019), and our results point to how playlists created by users are also generally successful at cultivating these affects among the general user population as well. 
As shown in Figure 2, playlists tagged with leisurely activity categories corresponded to the highest positivity in responses, consistent with prior work on stress levels in everyday situations (Västfjäll et al., 2012), while arousal trends mirror diurnal shifts in emotion and physical activity (Golder and Macy, 2011). Expanded results for all tagged categories are shown in Appendix Section E, Figures 19-23.
+
+# 4.3 Demographic Variations
+
+Individuality is a driving factor in how listeners experience musically-evoked emotions (Yang et al., 2007; Juslin and Västfjäll, 2008; Gómez-Canón et al., 2021). However, measuring how individual differences affect emotional responses to music has proved challenging, with many researchers citing the insufficiency of typical experiment scale as a primary reason (Juslin et al., 2008; Lundqvist et al., 2009; Cameron et al., 2013), especially in the presence of confounders. For listener demographics, prior work has seen conflicting observations of how demographic effects modulate affective responses against musical features. For example, some observe that age and gender modulate emotional responses to tempo, mode, volume, and pitch (Webster and Weir, 2005; Chen et al., 2020), while others report the absence of such demographic effects or even contrasting observations (Robazza et al., 1994; Cameron et al., 2013). These contradictory results might be due to variable experimental setups between studies, wherein the method of measurement often interferes with the experience itself (Gabrielsson and Lindstrom, 2010). This raises the importance of studying emotional reactions in a natural setting when analyzing affective responses to music in everyday situations. Here, we test for demographic differences in affective responses in relation to song features using our data.
+
+Demographic Variables. Our analysis focuses on two main demographic variables, namely listener gender and age. 
We operate within the constraints of platform-provided choices in user registration for our variable categories and use only publicly displayed user data in our analysis.
+
+Methods. To test for differences between pairs of demographic groups in their affective responses to musical and lyrical features, we formalize alternations between groups as treatments and compute average treatment effects (ATE). To account for covariates and reduce bias due to confounding variables, we construct a multi-modal stratified propensity score matching (PSM) model as a quasi-causal analysis of demographic effects. Here, we formalize comments as subjects; the propensity score, defined traditionally as the likelihood of being assigned to a treatment group based on observed characteristics of the subject (Rosenbaum and Rubin, 1983), is thus a scaled estimate of the likelihood of a commenter being of a demographic group $g_i$ given a set of song features $f_i$, or $P(g_i | f_i)$. We estimate this probability—the propensity score—via logistic regression on a song's musical and lyrical features, and match data points within stratified deciles of this score to mitigate confounding bias (Rosenbaum and Rubin, 1984; Paul, 2017). Within these matched and stratified deciles, we fit separate linear regression models on response valence and arousal against specific song features, weighting and pooling stratum-specific estimated treatment effects to estimate the ATE (Imbens, 2004) and its variance (Lunceford and Davidian, 2004). Consistent with prior work on musical emotions (Kamenetsky et al., 1997) and in social psychology on how cultural constructions of gender may account for differences in emotional display (Bem, 1974), we observe that response valence and arousal by demographic groups differ in their distributions—for example, as shown in Appendix Section A.7, comments made by women are on average higher in both valence and arousal than those made by men. 
Therefore, we test specifically for standardized change in affective responses across song features within demographic groups. Finally, as in Section 4.1, we stratify continuous variables in our analyses into fixed-length categories and estimate the ATE of each stratum.
+
+Results. As shown in Figure 3, we find that listener age and gender both modulated affective responses to statistically significant degrees across a series of musical and lyrical features. Compared to men, women had more intensely positive affective reactions to songs that were louder (>12 dBFS), of lower tempo (<120 BPM), of minimal tempo standard deviation (<5), of minor mode, and that had reverb, though gender differences often diminished (e.g., tempo $>160$ BPM) or became statistically insignificant at feature extremities. Lyrically, women were affected more negatively by a greater proportion of negemo terms, while men were affected more positively by posemo terms, consistent with observed gendered responses in other mediums (Bradley et al., 2001; Fernandez et al., 2012). Compared to women, men had more intensely positive affective responses to darker, flatter, softer, smoother, and warmer timbral characteristics. Age effects were much less pronounced, with those born in the 2000s reacting more positively to harder and rougher timbral features as well as minor modes than those born in the 1990s. Full results are shown in Appendix Section E, Figures 24 to 26.
+
+Affective experiences are central reasons for music consumption (DeNora, 1999), though music choices often misalign with intended well-being outcomes (Stewart et al., 2019). We hope our work further facilitates more effective and more intentional music choices in daily consumption to achieve these desired outcomes. Under appraisal theory, affective responses are learned and conditioned through individual lived experiences rather than innate to certain biological factors (Brody and Hall, 2010). 
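A rough, self-contained sketch of the stratified propensity-score procedure described under Methods, on synthetic stand-in data (the single confounding feature, sample sizes, and effect sizes are invented, and within-stratum effects are estimated here by a simple difference in means rather than the paper's per-stratum regressions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: comments with one song feature, a binary demographic
# "treatment" group, and an affective response that depends on both.
# Every name and number here is illustrative, not from the paper.
n = 4000
feature = rng.normal(0, 1, n)                          # e.g. a standardized song feature
p_group = 1 / (1 + np.exp(-0.8 * feature))             # group membership is confounded with the feature
group = (rng.uniform(size=n) < p_group).astype(int)
response = 0.3 * group + 0.5 * feature + rng.normal(0, 1, n)   # true ATE = 0.3

# Step 1: propensity score P(group | feature) via a small logistic regression
# fitted by gradient ascent (an off-the-shelf logistic regression would also do).
Xd = np.column_stack([np.ones(n), feature])
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xd @ w))
    w += 0.1 * Xd.T @ (group - p) / n
propensity = 1 / (1 + np.exp(-Xd @ w))

# Step 2: stratify into propensity-score deciles. Step 3: estimate the effect
# within each stratum and pool the estimates weighted by stratum size.
edges = np.quantile(propensity, np.linspace(0, 1, 11))
stratum = np.clip(np.digitize(propensity, edges[1:-1]), 0, 9)
ate, total = 0.0, 0
for s in range(10):
    m = stratum == s
    t, c = m & (group == 1), m & (group == 0)
    if t.any() and c.any():
        ate += (response[t].mean() - response[c].mean()) * m.sum()
        total += m.sum()
ate /= total
print(round(ate, 2))  # recovers an estimate near the true effect of 0.3
```

Comparing within deciles of the propensity score is what removes most of the confounding: a naive treated-vs-control difference in means on this data would absorb the feature's own effect on the response.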
These findings on demographic effects should then be interpreted as products of the social norms, values, and lived experiences (de Boise, 2016) of those who may self-identify under the broad demographic groups in question, with the platform, song, and comment board being part of the context in which these emotions are deployed.
+
+# 5 Disclosures of Mental Health Disorders
+
+In the context of social media, given the frequent benefits of anonymity (De Choudhury and De, 2014) and social connectedness (Bazarova and Choi, 2014), self-disclosures of personal details can be a method to find social support, advice, and belonging (Ernala et al., 2018; Yang et al., 2019). This phenomenon gives life to "网抑云," which refers to the outpour of emotional and personal comments on the social music platform, especially late at night and under sad songs. While known colloquially and in popular culture, the mechanisms behind self-disclosure phenomena in the context of social music platforms are not well understood.
+
+Motivated to better understand disclosures of mental health disorders in a musically-situated social environment, we frame them as affective responses (Ho et al., 2018) and test for factors driving this behavior, the social support they receive, and differences in discloser user activity. Addressing these unknowns will help us understand how users may use social music platforms for therapeutic purposes (Schriewer and Bulaj, 2016) and guide us to better support vulnerable and at-risk individuals.
+
+Dataset Collection. In the absence of clinically-aligned user data (Harrigian et al., 2020), we source disorder terms from the DSM-5-TR$^7$ (American Psychiatric Association, 2022) and utilize regular expressions to identify disclosures of self-reported statements of diagnosis (Coppersmith et al., 2014, 2015; Cohan et al., 2018) for mental health disorders in music comments. 
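As a minimal illustration of this screening step, the patterns below are hypothetical and deliberately simplified; the paper's actual regular expressions (described in its Appendix Section F) are more extensive and are built from DSM-5-TR disorder terms.

```python
import re

# Hypothetical, simplified patterns for self-reported diagnosis statements
# ("I was diagnosed with <disorder>"); disorder terms and pattern shapes are
# illustrative stand-ins, not the paper's expressions.
DISORDERS = "抑郁症|焦虑症|双相情感障碍"  # depression, anxiety disorder, bipolar disorder
PATTERNS = [
    re.compile(rf"(我|本人)?(被)?确诊(了|过)?.{{0,6}}({DISORDERS})"),  # "(I was) diagnosed with ..."
    re.compile(rf"(我|本人)(得|患)(了|上)?({DISORDERS})"),             # "I have / suffer from ..."
]

def is_candidate_disclosure(comment: str) -> bool:
    """Flag a comment as a *candidate* self-disclosure for later manual review."""
    return any(p.search(comment) for p in PATTERNS)

print(is_candidate_disclosure("我去年确诊了抑郁症"))  # "I was diagnosed with depression last year"
print(is_candidate_disclosure("这首歌让我想起夏天"))  # "this song reminds me of summer"
```

Matches are only candidates: a manual pass, like the one the paper applies, is still needed to remove jokes, quotes, and disingenuous statements that pattern matching alone cannot distinguish.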
Two Chinese native speakers further manually filter for genuine statements of disclosure (i.e., excluding jokes, quotes, and clearly disingenuous statements), resulting in 1133 users with self-reported mental health disorders. We find that, out of all disclosers, most disclose depression $(81.2\%)$, anxiety $(19.9\%)$, and bipolar $(18.5\%)$ disorders; additionally, most users $(60.6\%)$ self-identify as women, consistent with constituent gender differences of affective disorders in national studies (Huang et al., 2019). Disclosers show greater platform usage (Kolmogorov-Smirnov on user levels, $p < 0.01$), insomnia-aligned diurnal user activity consistent with disorder symptoms (Taylor et al., 2005; Harvey, 2008), increased engagement with playlists of sadder natures, e.g., as shown in Figure 4, loneliness $(+302\%)$, sadness $(+158\%)$, and night $(+50.1\%)$, and decreased engagement with playlists of more active natures, e.g., exercise $(-51.7\%)$, compared to typical users. These observations mirror affective disorder activity trends (Cooney et al., 2013) and suggest that people with affective disorders are more likely to use music reflective of negative emotions than positive emotions to manage feelings of sadness and depression (Stewart et al., 2019). A detailed breakdown of our data, comorbidities, and our specific regular expressions is described in Appendix Section F.
+
+![](images/8937f3646d883dfba36671a4a4176b17cfb48d4d192853e210eb380c97baace9.jpg)
+
+![](images/28ec8dd1ec23070bf516122506985dafd24f71d8787646bff1e070e32ffe4652.jpg)
+Figure 4: Relative tagged playlist commenting activity between disclosers and the set of all users on emotion (top) and setting (bottom) tagged playlists. Note that as each playlist may have up to three unique tags, relative tag percentages do not add up to $100\%$. The complete set of figures for all playlist tag categories is shown in Appendix Section F, Figure 6.
+
+Affective Response. Treating the act of self-disclosure as an affective response, we test for factors driving this behavior. Statements of self-disclosure are more likely to appear as top-level comments (78.1%) than as replies (21.9%); top-level disclosures are biased towards songs with features generally associated with sadness (Juslin and Laukka, 2004) ($p < 0.01$ for all features other than tempo, Kolmogorov-Smirnov), i.e., softer songs with minor modes, and towards playlists with tags of the same nature, while disclosures in reply occur to comments that indicate emotional distress, or that are themselves replies to existing comments made by the discloser ("怎么了", meaning "what's wrong"). Responses to the former are split in function, with disclosers either expressing their diagnosis in empathy for encouragement ("...我一年前也确诊了,事情会好起来的", meaning "...I was diagnosed a year ago, things will be better") or commiserating ("我也确诊了,活着好难...", meaning "I was diagnosed too, living is so hard..."), showing evidence of resonance (Miller, 2015; Rosa, 2019) and high person-centered condolence (High and Dillard, 2012).
+
+Social Support. Characterizing audience engagement around self-disclosure comments through their content, we identify supportive comments according to the four major classes of social support around health concerns—prescriptive, informational, instrumental, and emotional support—from established literature (Turner et al., 1983; George et al., 1989; De Choudhury and De, 2014) and label the main type of support each comment falls under. We then fit logistic regression models on the dependent variable of receipt, aiming to identify when users are more likely to receive a supportive comment in response to disclosure, including song features as independent variables and song popularity, comment length, user demographics, and comment LDA topic distributions as controls. 
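A rough sketch of such a receipt model on synthetic stand-in data; the feature names and coefficient signs mirror the reported results, but the magnitudes, the data, and the plain gradient-ascent fit are illustrative only, and the paper's controls are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: one row per disclosure, a few lyrical/musical features,
# and a binary outcome "received a supportive reply". Feature names and the
# signs of the true coefficients follow the reported results; everything else
# is invented.
n = 5000
names = ["friend", "ingest", "negemo", "reverb"]
X = rng.normal(0, 1, (n, 4))
true_w = np.array([2.23, 0.97, -2.04, 0.90])
received = (rng.uniform(size=n) < 1 / (1 + np.exp(-(X @ true_w - 0.5)))).astype(float)

# Fit logistic regression by gradient ascent on the log-likelihood; controls
# (song popularity, demographics, LDA topics) are left out for brevity.
Xd = np.column_stack([np.ones(n), X])
w = np.zeros(5)
for _ in range(5000):
    p = 1 / (1 + np.exp(-Xd @ w))
    w += 0.5 * Xd.T @ (received - p) / n

# Coefficient signs show which features predict receiving a supportive reply.
for name, coef in zip(names, w[1:]):
    print(f"{name}: {coef:+.2f}")
```

Reading the fitted coefficients by sign and magnitude is what licenses statements like "friend terms positively predict receiving support, negemo terms negatively".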
We observe that emotional support ($52\%$, e.g., "加油, 事情一定会好起来的我保证", meaning "good luck, everything will be better I promise") and prescriptive support ($31\%$, e.g., "听一些令人振奋的歌曲吧", meaning "listen to some uplifting songs") largely exceed informational ($9\%$, e.g., "…治疗可能会有帮助,两年治疗后我……", meaning "…therapy could help, after two years of therapy I……") and instrumental ($8\%$, e.g., "…你可以私聊我", meaning "…you can private message me") forms in response to disclosures. Several psycholinguistic lyrical features proved statistically significant ($p<0.05$) in predicting whether a disclosure comment on a song would receive a supportive reply; the rate of terms in lyrics relating to social processes, specifically friend ($+2.23$) and ingest ($+0.97$), positively predicts this prosocial behavior, while negative emotion terms ($-2.04$) do so negatively, mirroring negative correlations between sadness and prosocial tendencies (Ye et al., 2020). Among musical features, only reverberation did so positively ($+0.90$). While past work has studied the prosocial effects of music, most studies have used only a limited set of author-chosen songs (Greitemeyer, 2009; Kennedy, 2013) or crowd-sourced prosocial perceptions (Ruth, 2017); here, we specifically identify what makes for prosocial songs and situate our study in the context of social support for mental health self-disclosures. Taken together, these observations not only provide ample pointers for music therapists on musical and dyadic conversational means for more successful emotion-focused interventions (Jensen, 2001) but also guide users on how to effectively find social support on the platform when needed (De Choudhury and Kiciman, 2017).
+
+# 6 Discussion and Future Work
+
+In this work, we sought to examine the driving factors behind variations in emotional reactions to music, via a large-scale computational study of a Chinese social music platform. 
Our analyses here reveal several nuances in how idiosyncratic variables elicit emotional responses, with a degree of precision that prior studies have often lacked. In a case study of mental health self-disclosures in music comments, we characterized a type of discourse in the context of a popular social phenomenon, demonstrated the importance of posting location in determining the social support disclosures would receive, and revealed several factors driving the prosociality of music in this context. We see our present work situated in the broader context of studying emotionality in music and in the design of platforms to promote healthier interactions more centered on user well-being. Here, we highlight a few limitations and directions for future work; models, code, and anonymized data are made available at https://github.com/skychwang/music-emotions.
+
+The music we listen to has a strong effect on our moods (McCraty et al., 1998). The integration of emotional response analysis into music recommendation systems could promote healthier recommendations (Konstan and Riedl, 2012; Singh et al., 2020) more cognizant of listener well-being outcomes. No one size fits all, and more sophisticated analyses could capture additional factors that explain emotional response variations, towards more personalized music emotion recommendation systems.
+
+While our work measures the effects of demographic variables on emotional responses, there remains a bio-psycho-social question of why this variation exists in relation to song features. Lived experiences condition our emotions (Brody, 1997); future work could aim, through significant theoretical and qualitative study, to better identify the relationships and causes behind these variations.
+
+Several open questions also remain as to whether risk may be qualified in this context in relation to well-being. 
Specifically, it would be interesting to study how recommendation interactions may disproportionately affect those afflicted with mental health disorders, and how we may design platforms, in the context of well-being outcomes, under normative goals of equity and distributive justice (Rawls, 2001). + +# Ethical Considerations + +Data Release. For user comments, taking user privacy considerations into account, we release the set of comment ids used in our analyses—which researchers are able to use in conjunction with the Netease API to obtain original comment content—mirroring Twitter data release guidelines for academic research. + +Identity Affiliation. In studying demographic effects, we examine only the aggregate behavior of users who make public their demographic self-identification choices during registration under platform constraints. In particular, we note that platform choices for gender are limited only to binary options—男 men and 女 women. These choices should not be interpreted to have taken into account gender fluidity considerations or the multidimensional spectrum of gender identities (Larson, 2017). + +# Limitations + +Measuring Affective Response. In particular, we mirror the concerns by Mohammad (2020); notably, that (1) emotion lexicons are limited in coverage and do not include all possible terms in a language, and that (2) as languages and, in particular, our perceptions of words in them are by nature entities of change that inherently possess socio-cultural variations, emotion scores for words are not immutable, neither longitudinally nor socio-culturally. 
As such, while we have attempted to mitigate this limitation by (1) choosing the largest Chinese emotion lexicon annotated for words sourced from the domain of social media and (2) comparing our findings to those of previous smaller-scale in-person studies that use varying methods to measure emotion when possible—even as no "gold standard" measure of emotional response exists, physiological, behavioral, or otherwise (Mauss and Robinson, 2009)—we encourage future work to further examine these phenomena in a greater variety of contexts. Further, our study does not make explicit causal claims around factors of music choice and user predisposition, i.e., what caused users to choose to listen to a specific song, or what their states of mind were prior to making this choice. While our work shows evidence of variations in affective responses correlated with musical, lyrical, demographic, and mental health factors, as with the quasi-causal results estimating demographic effects on listener affective responses, we do not argue that these alone explain the entirety of the associated variations. In moving towards truly causal studies (Feder et al., 2021), we encourage further direct participatory work to examine these observations in larger, more controlled, and even cross-cultural contexts.
+
+Censorship and Moderation. Users are able to report comments that violate platform rules,$^{8}$ and active moderation of user content exists on the platform. As we use only public posts on the platform, it is thus important to interpret our findings in the context of internet censorship in China (Vuori and Paltemaa, 2015). In particular, as noted by previous studies on mental health postings in Chinese social media (Cui et al., 2022), comments that go against certain government objectives—such as the "stability and unity for a harmonious society" (Wang, 2012), which mental health-related postings may go against—are often censored (Paltemaa et al., 2020). 
While pilot tests matching regular expressions for such phrases within platform comments still yielded significant quantities, the degree of censorship that these types of comments receive remains unclear.
+
+Statements of Diagnosis. As we study users with self-reported statements of diagnosis, our method only potentially captures a sub-population of each disorder—those who choose to disclose a diagnosis on a public platform under the option of anonymity. While we have attempted to increase the precision of identifying individuals who are diagnosed with specific disorders through significant manual annotation, in the absence of clinically-aligned user data we nonetheless are unable to verify whether genuine-appearing disclosures of mental health disorder diagnoses are ultimately truthful. However, as noted by Coppersmith et al. (2014), given the stigmas often associated with mental illnesses, it seems unlikely that users would disclose that they were diagnosed with a condition they do not possess. Individuals who may be diagnosed with affective disorders undoubtedly also remain in the set of all users that we compare disclosers against and, as such, our results on platform user activity differences should only be interpreted in the context of discovering broad themes—not as ground truths of comparisons between those who are diagnosed and those who aren't. Finally, we also note concerns in clinical psychology about the heterogeneity of psychiatric diagnoses, which remains contentious in current literature. Notably, standards of diagnosis use different decision-making rules, significant overlaps exist in symptoms between diagnoses, and such diagnoses may instead mask the complex underlying causes of human distress with potentially scientifically meaningless labels (Allsopp et al., 2019). 
+ +# Acknowledgements + +We thank David Jurgens, Michelle Cohn, Xiang Zhou, Maximillian Chen, Kexin Fan, and the anonymous reviewers for their helpful comments, thoughts, and discussions. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2036197. + +# References + +Kate Allsopp, John Read, Rhiannon Corcoran, and Peter Kinderman. 2019. Heterogeneity in psychiatric diagnostic classification. *Psychiatry research*, 279:15-22. +American Psychiatric Association. 2022. Diagnostic and statistical manual of mental disorders, fifth edition, text revision. American psychiatric association Washington, DC. +Rowland Atkinson and John Flint. 2001. Accessing hidden and hard-to-reach populations: Snowball research strategies. Social research update, 33(1):1-4. +Syeda Beenish Bareeqa, Syed Ijlal Ahmed, Syeda Sana Samar, Waqas Yasin, Sani Zehra, George M Monese, and Robert V Gouthro. 2021. Prevalence of depression, anxiety and stress in china during covid-19 pandemic: A systematic review with meta-analysis. The International Journal of Psychiatry in Medicine, 56(4):210-227. +Natalya N Bazarova and Yoon Hyung Choi. 2014. Self-disclosure in social media: Extending the functional approach to disclosure motivations and characteristics on social network sites. Journal of Communication, 64(4):635-657. +Sandra L Bem. 1974. The measurement of psychological androgyny. Journal of consulting and clinical psychology, 42(2):155. +Scott Beveridge and Don Knox. 2018. Popular music and the role of vocal melody in perceived emotion. Psychology of Music, 46(3):411-423. +Margaret M Bradley, Maurizio Codispoti, Dean Sabatinelli, and Peter J Lang. 2001. Emotion and motivation ii: sex differences in picture processing. Emotion, 1(3):300. +Leslie R Brody. 1997. Gender and emotion: Beyond stereotypes. Journal of Social issues, 53(2):369-393. + +Leslie R Brody and Judith A Hall. 2010. Gender, emotion, and socialization. 
In Handbook of gender research in psychology, pages 429-454. Springer.
+Michaèle Cameron, Julie Baker, and Mark Peterson. 2013. Waiting for service: The effects of music volume and gender. Services Marketing Quarterly, 34(4):257-273.
+Xuqian Chen, Shengqiao Huang, Xueting Hei, and Hongyuan Zeng. 2020. Felt emotion elicited by music: are sensitivities to various musical features different for young children and young adults? The Spanish Journal of Psychology, 23.
+Arman Cohan, Bart Desmet, Andrew Yates, Luca Soldaini, Sean MacAvaney, and Nazli Goharian. 2018. SMHD: a large-scale resource for exploring online language usage for multiple mental health conditions. In 27th International Conference on Computational Linguistics, pages 1485-1497. ACL.
+Gary M Cooney, Kerry Dwan, Carolyn A Greig, Debbie A Lawlor, Jane Rimer, Fiona R Waugh, Marion McMurdo, and Gillian E Mead. 2013. Exercise for depression. Cochrane database of systematic reviews, (9).
+Glen Coppersmith, Mark Dredze, and Craig Harman. 2014. Quantifying mental health signals in Twitter. In Proceedings of the workshop on computational linguistics and clinical psychology: From linguistic signal to clinical reality, pages 51-60.
+Glen Coppersmith, Mark Dredze, Craig Harman, and Kristy Hollingshead. 2015. From ADHD to SAD: Analyzing the language of mental health on Twitter through self-reported diagnoses. In Proceedings of the 2nd workshop on computational linguistics and clinical psychology: from linguistic signal to clinical reality, pages 1-10.
+Mihaly Csikszentmihalyi and Judith LeFevre. 1989. Optimal experience in work and leisure. Journal of personality and social psychology, 56(5):815.
+Jesse Cui, Tingdan Zhang, Dandan Pang, Kokil Jaidka, Garrick Sherman, Vinit Jakhetiya, Lyle Ungar, and Sharath Chandra Guntuku. 2022. Social media reveals urban-rural differences in stress across China. ICWSM.
+Sam de Boise. 2016. Contesting 'sex' and 'gender' difference in emotions through music use in the UK. 
Journal of Gender Studies, 25(1):66-84. +Munmun De Choudhury and Sushovan De. 2014. Mental health discourse on reddit: Self-disclosure, social support, and anonymity. In *Eighth international AAAI conference on weblogs and social media*. +Munmun De Choudhury and Emre Kiciman. 2017. The language of social support in social media and its effect on suicidal ideation risk. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11, pages 32-41. + +Tia DeNora. 1999. Music as a technology of the self. Poetics, 27(1):31-56. +Tia DeNora. 2000. *Music in everyday life*. Cambridge University Press. +Stuart Dredge. 2022. Netease cloud music reveals its revenues grew $43\%$ in 2021. +Tuomas Eerola, Rafael Ferrer, and Vinoo Alluri. 2012. Timbre and affect dimensions: Evidence from affect and similarity ratings and acoustic correlates of isolated instrument sounds. *Music Perception: An Interdisciplinary Journal*, 30(1):49-70. +Tuomas Eerola, Anders Friberg, and Roberto Bresin. 2013. Emotional expression in music: contribution, linearity, and additivity of primary musical cues. Frontiers in psychology, 4:487. +Daniel PW Ellis. 2007. Beat tracking by dynamic programming. Journal of New Music Research, 36(1):51-60. +Sindhu Kiranmai Ernala, Tristan Labetoulle, Fred Bane, Michael L Birnbaum, Asra F Rizvi, John M Kane, and Munmun De Choudhury. 2018. Characterizing audience engagement and assessing its impact on social media disclosures of mental illnesses. In Twelfth international AAAI conference on web and social media. +Amir Feder, Katherine A Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E Roberts, et al. 2021. Causal inference in natural language processing: Estimation, prediction, interpretation and beyond. arXiv preprint arXiv:2109.00725. +Cristina Fernández, Juan C Pascual, Joaquim Soler, Matilde Elices, Maria J Portella, and Enrique Fernández-Abascal. 2012. 
Physiological responses induced by emotion-eliciting films. Applied psychophysiology and biofeedback, 37(2):73-79. +Frederic Font, Tim Brookes, George Fazekas, Martin Guerber, Amaury La Burthe, David Plans, Mark d. Plumbley, Meir Shaashua, Wenwu Wang, and Xavier Serra. 2016. audio commons: bringing creative commons audio content to the creative industries. journal of the audio engineering society. +Alf Gabrielsson and Siv Lindstrom. 2010. Strong experiences with music. Handbook of music and emotion: Theory, research, applications, pages 547-574. +Sandra Garrido, Catherine J Stevens, Esther Chang, Laura Dunne, and Janette Perz. 2018. Music and dementia: individual differences in response to personalized playlists. Journal of Alzheimer's Disease, 64(3):933-941. +Linda K George, Dan G Blazer, Dana C Hughes, and Nancy Fowler. 1989. Social support and the outcome of major depression. The British Journal of Psychiatry, 154(4):478-485. + +Scott A Golder and Michael W Macy. 2011. Diurnal and seasonal mood vary with work, sleep, and daylength across diverse cultures. Science, 333(6051):1878-1881. +Patrick Gomez and Brigitta Danuser. 2007. Relationships between musical structure and psychophysiological measures of emotion. Emotion, 7(2):377. +Juan Sebastián Gómez-Canón, Estefania Cano, Tuomas Eerola, Perfecto Herrera, Xiao Hu, Yi-Hsuan Yang, and Emilia Gómez. 2021. Music emotion recognition: Toward new, robust standards in personalized and context-sensitive applications. IEEE Signal Processing Magazine, 38(6):106-114. +Alinka E Greasley and Alexandra Lamont. 2011. Exploring engagement with music in everyday life using experience sampling methodology. Musicae Scientiae, 15(1):45-71. +Andrew H Gregory and Nicholas Varney. 1996. Cross-cultural comparisons in the affective response to music. Psychology of Music, 24(1):47-52. +Tobias Greitemeyer. 2009. Effects of songs with prosocial lyrics on prosocial behavior: Further evidence and a mediating mechanism. 
*Personality and Social Psychology Bulletin*, 35(11):1500–1511. +Nicolas Felipe Gutierrez Paez, Juan Sebastián Gomez-Canon, Lorenzo Porcaro, Patricia Santos, Davinia Hernandez-Leo, and Emilia Gomez. 2021. Emotion annotation of music: A citizen science approach. In International Conference on Collaboration Technologies and Social Computing, pages 51-66. Springer. +Julia C Hailstone, Rohani Omar, Susie MD Henley, Chris Frost, Michael G Kenward, and Jason D Warren. 2009. It's not what you play, it's how you play it: Timbre affects perception of emotion in music. Quarterly journal of experimental psychology, 62(11):2141-2155. +Keith Harrigian, Carlos Aguirre, and Mark Dredze. 2020. On the state of social media data for mental health research. arXiv preprint arXiv:2011.05233. +Allison G Harvey. 2008. Sleep and circadian rhythms in bipolar disorder: seeking synchrony, harmony, and regulation. American journal of psychiatry, 165(7):820-829. +Shigeko Hatano and Takeo Hashimoto. 2000. Booming index as a measure for evaluating booming sensation. In Proc. Inter-Noise, 233, pages 1-6. +Kate Hevner. 1935. Expression in music: a discussion of experimental studies and theories. Psychological review, 42(2):186. +Andrew C High and James Price Dillard. 2012. A review and meta-analysis of person-centered messages and social support outcomes. Communication Studies, 63(1):99-118. + +Yoshiyuki Hirano, Masafumi Fujita, Kazuko Watanabe, Masami Niwa, Toru Takahashi, Masayuki Kanematsu, Yasushi Ido, Mihoko Tomida, and Minoru Onozuka. 2006. Effect of unpleasant loud noise on hippocampal activities during picture encoding: an fmri study. Brain and cognition, 61(3):280-285. +Annabell Ho, Jeff Hancock, and Adam S Miner. 2018. Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication, 68(4):712-733. +Xiao Hu, Fanjie Li, and Tzi-Dong Jeremy Ng. 2018. On the relationships between music-induced emotion and physiological signals. 
In ISMIR, pages 362-369. +Chin-Lan Huang, Cindy K Chung, Natalie Hui, Yi-Cheng Lin, Yi-Tai Seih, Ben CP Lam, Wei-Chuan Chen, Michael H Bond, and James W Pennebaker. 2012. The development of the chinese linguistic inquiry and word count dictionary. Chinese Journal of Psychology. +Yueqin Huang, YU Wang, Hong Wang, Zhaorui Liu, Xin Yu, Jie Yan, Yaqin Yu, Changgui Kou, Xiufeng Xu, Jin Lu, et al. 2019. Prevalence of mental disorders in china: a cross-sectional epidemiological study. The Lancet Psychiatry, 6(3):211-224. +Patrick G Hunter and E Glenn Schellenberg. 2010. Music and emotion. In *Music perception*, pages 129-164. Springer. +Guido W Imbens. 2004. Nonparametric estimation of average treatment effects under exogeneity: A review. Review of Economics and statistics, 86(1):4-29. +Tariqullah Jan and Wenwu Wang. 2012. Blind reverberation time estimation based on laplace distribution. In 2012 Proceedings of the 20th European Signal Processing Conference (EUSIPCO), pages 2050-2054. IEEE. +Kaja L Jensen. 2001. The effects of selected classical music on self-disclosure. Journal of music therapy, 38(1):2-27. +Patrik N. Juslin. 2013. From everyday emotions to aesthetic emotions: Towards a unified theory of musical emotions. Physics of Life Reviews, 10(3):235-266. +Patrik N Juslin. 2019. Musical emotions explained: Unlocking the secrets of musical affect. Oxford University Press, USA. +Patrik N Juslin and Petri Laukka. 2004. Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. Journal of new music research, 33(3):217-238. +Patrik N Juslin, Simon Liljestrom, Daniel Västfjäll, Gonçalo Barradas, and Ana Silva. 2008. An experience sampling study of emotional reactions to music: listener, music, and situation. Emotion, 8(5):668. + +Patrik N Juslin and Daniel Västfjäll. 2008. Emotional responses to music: The need to consider underlying mechanisms. Behavioral and brain sciences, 31(5):559-575. 
Stuart B Kamenetsky, David S Hill, and Sandra E Trehub. 1997. Effect of tempo and dynamics on the perception of emotion in music. Psychology of Music, 25(2):149–160.

Patrick Kennedy. 2013. The relationship between prosocial music and helping behaviour and its mediators: An Irish college sample. Journal of European Psychology Students, 4(1).

Stefan Koelsch. 2014. Brain correlates of music-evoked emotions. Nature Reviews Neuroscience, 15(3):170–180.

Joseph A Konstan and John Riedl. 2012. Recommender systems: from algorithms to user experience. User Modeling and User-Adapted Interaction, 22(1):101–123.

Herbert E Krugman. 1943. Affective response to music as a function of familiarity. The Journal of Abnormal and Social Psychology, 38(3):388.

Carol L Krumhansl. 2001. Cognitive foundations of musical pitch, volume 17. Oxford University Press.

Brian Larson. 2017. Gender as a variable in natural-language processing: Ethical considerations. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 1–11, Valencia, Spain. Association for Computational Linguistics.

Ying Liu, Guangyuan Liu, Dongtao Wei, Qiang Li, Guangjie Yuan, Shifu Wu, Gaoyuan Wang, and Xingcong Zhao. 2018. Effects of musical tempo on musicians' and non-musicians' emotional experience when listening to music. Frontiers in Psychology, page 2118.

Jared K Lunceford and Marie Davidian. 2004. Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study. Statistics in Medicine, 23(19):2937–2960.

Lars-Olov Lundqvist, Fredrik Carlsson, Per Hilmerson, and Patrik N Juslin. 2009. Emotional responses to music: Experience, expression, and physiology. Psychology of Music, 37(1):61–90.

Iris B Mauss and Michael D Robinson. 2009. Measures of emotion: A review. Cognition and Emotion, 23(2):209–237.

Rollin McCraty, Bob Barrios-Choplin, Michael Atkinson, and Dana Tomasino. 1998.
The effects of different types of music on mood, tension, and mental clarity. Alternative Therapies in Health and Medicine, 4(1):75–84.

Brian McFee, Colin Raffel, Dawen Liang, Daniel P Ellis, Matt McVicar, Eric Battenberg, and Oriol Nieto. 2015. librosa: Audio and music signal analysis in Python. In Proceedings of the 14th Python in Science Conference, volume 8, pages 18–25. Citeseer.

Leonard B Meyer. 1956. Emotion and meaning in music. University of Chicago Press.

Rada Mihalcea and Carlo Strapparava. 2012. Lyrics, music, and emotions. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 590–599.

Vincent Miller. 2015. Resonance as a social phenomenon. Sociological Research Online, 20(2):58–70.

Saif Mohammad, Xiaodan Zhu, and Joel Martin. 2014. Semantic role labeling of emotions in tweets. In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 32–41, Baltimore, Maryland. Association for Computational Linguistics.

Saif M Mohammad. 2020. Practical and ethical considerations in the effective use of emotion and sentiment lexicons. arXiv preprint arXiv:2011.03492.

National Bureau of Statistics of China. 2021. Main data of the seventh national population census.

Lauri Paltemaa, Juha A Vuori, Mikael Mattlin, and Jouko Katajisto. 2020. Meta-information censorship and the creation of the Chinanet bubble. Information, Communication & Society, 23(14):2064–2080.

Michael Paul. 2017. Feature selection as causal inference: Experiments with text classification. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 163–172.

Andy Pearce, Tim Brookes, and Russell Mason. 2017. First prototype of timbral characterisation tool for semantically annotating non-musical. Audio Commons project deliverable D, 5.

Andy Pearce, Tim Brookes, and Russell Mason. 2019.
Modelling timbral hardness. Applied Sciences, 9(3):466.

James W Pennebaker. 2011. The secret life of pronouns. New Scientist, 211(2828):42–45.

James W Pennebaker, Ryan L Boyd, Kayla Jordan, and Kate Blackburn. 2015. The development and psychometric properties of LIWC2015. Technical report.

John Rawls. 2001. Justice as fairness: A restatement. Harvard University Press.

Claudio Robazza, Cristina Macaluso, and Valentina D'Urso. 1994. Emotional reactions to music by gender, age, and expertise. Perceptual and Motor Skills, 79(2):939–944.

James Robert, Marc Webbie, et al. 2018. Pydub.

Hartmut Rosa. 2019. Resonance: A sociology of our relationship to the world. John Wiley & Sons.

Paul R Rosenbaum and Donald B Rubin. 1983. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55.

Paul R Rosenbaum and Donald B Rubin. 1984. Reducing bias in observational studies using subclassification on the propensity score. Journal of the American Statistical Association, 79(387):516–524.

James A Russell. 1980. A circumplex model of affect. Journal of Personality and Social Psychology, 39(6):1161.

Nicolas Ruth. 2017. "Heal the world": A field experiment on the effects of music with prosocial lyrics on prosocial behavior. Psychology of Music, 45(2):298–304.

Suvi Saarikallio, Sirke Nieminen, and Elvira Brattico. 2013. Affective reactions to musical stimuli reflect emotional use of music in everyday life. Musicae Scientiae, 17(1):27–39.

Katharina Schäfer, Suvi Saarikallio, and Tuomas Eerola. 2020. Music may reduce loneliness and act as social surrogate for a friend: evidence from an experimental listening study. Music & Science, 3:2059204320935709.

Karl Schriewer and Grzegorz Bulaj. 2016. Music streaming services as adjunct therapies for depression, anxiety, and bipolar symptoms: convergence of digital technologies, mobile apps, emotions, and global mental health. Frontiers in Public Health, 4:217.

Emery Schubert.
2004. Modeling perceived emotion with continuous musical features. Music Perception, 21(4):561–585.

Ignacio Siles, Andrés Segura-Castillo, Mónica Sancho, and Ricardo Solís-Quesada. 2019. Genres as social affect: Cultivating moods and emotions through playlists on Spotify. Social Media + Society, 5(2):2056305119847514.

Ashudeep Singh, Yoni Halpern, Nithum Thain, Konstantina Christakopoulou, E Chi, Jilin Chen, and Alex Beutel. 2020. Building healthy recommendation sequences for everyone: A safe reinforcement learning approach. In Proceedings of the FAccTRec Workshop, Online, pages 26–27.

John A Sloboda, Susan A O'Neill, and Antonia Ivaldi. 2001. Functions of music in everyday life: An exploratory study using the experience sampling method. Musicae Scientiae, 5(1):9–32.

John A Sloboda and Susan A O'Neill. 2001. Emotions in everyday listening to music. Music and Emotion: Theory and Research, 8:415–429.

Joanna Stewart, Sandra Garrido, Cherry Hense, and Katrina McFerran. 2019. Music use for mood regulation: Self-awareness and conscious listening choices in young people with tendencies to depression. Frontiers in Psychology, 10:1199.

Daniel J Taylor, Kenneth L Lichstein, H Heath Durrence, Brant W Reidel, and Andrew J Bush. 2005. Epidemiology of insomnia, depression, and anxiety. Sleep, 28(11):1457–1464.

R Jay Turner, B Gail Frankel, and Deborah M Levin. 1983. Social support: Conceptualization, measurement, and implications for mental health. Research in Community & Mental Health.

Marjolein D Van der Zwaag, Joyce HDM Westerink, and Egon L Van den Broek. 2011. Emotional and psychophysiological responses to tempo, mode, and percussiveness. Musicae Scientiae, 15(2):250–269.

Pantelis N Vassilakis and K Fitz. 2007. SRA: A web-based research tool for spectral and roughness analysis of sound signals. In Proceedings of the 4th Sound and Music Computing (SMC) Conference, pages 319–325.

Daniel Västfjäll, Patrik N Juslin, and Terry Hartig. 2012.
Music, subjective wellbeing, and health: The role of everyday emotions. Music, Health, and Wellbeing, pages 405–423.

Juha Antero Vuori and Lauri Paltemaa. 2015. The lexicon of fear: Chinese internet control practice in Sina Weibo microblog censorship. Surveillance & Society, 13(3/4):400–421.

Jonna K Vuoskoski and Tuomas Eerola. 2015. Extra-musical information contributes to emotions induced by music. Psychology of Music, 43(2):262–274.

Zachary Wallmark, Marco Iacoboni, Choi Deblieck, and Roger A Kendall. 2018. Embodied listening and timbre: Perceptual, acoustical, and neural correlates. Music Perception: An Interdisciplinary Journal, 35(3):332–363.

Han Wang and RongRong Fu. 2020. Exploring user experience of music social mode: take NetEase Cloud Music as an example. In International Conference on Applied Human Factors and Ergonomics, pages 993–999. Springer.

Shaojung Sharon Wang. 2012. China's internet lexicon: Symbolic meaning and commoditization of grass mud horse in the harmonious society. First Monday.

Gregory D Webster and Catherine G Weir. 2005. Emotional responses to music: Interactive effects of mode, texture, and tempo. Motivation and Emotion, 29(1):19–39.

Diyi Yang, Zheng Yao, Joseph Seering, and Robert Kraut. 2019. The channel matters: Self-disclosure, reciprocity and social support in online cancer support groups. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1–15.

Yi-Hsuan Yang, Ya-Fan Su, Yu-Ching Lin, and Homer H Chen. 2007. Music emotion recognition: The role of individuality. In Proceedings of the International Workshop on Human-Centered Multimedia, pages 13–22.

Yingying Ye, Tingting Long, Cuizhen Liu, and Dan Xu. 2020. The effect of emotion on prosocial tendency: the moderating effect of epidemic severity under the outbreak of COVID-19. Frontiers in Psychology, 11:588701.

Liang-Chih Yu, Lung-Hao Lee, Shuai Hao, Jin Wang, Yunchao He, Jun Hu, K. Robert Lai, and Xuejie Zhang. 2016.
Building Chinese affective resources in valence-arousal dimensions. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 540–545, San Diego, California. Association for Computational Linguistics.

Zhenkun Zhou, Ke Xu, and Jichang Zhao. 2018. Homophily of music listening in online social networks of China. Social Networks, 55:160–169.

Eberhard Zwicker and Hugo Fastl. 2013. Psychoacoustics: Facts and models, volume 22. Springer Science & Business Media.

# Overview of Appendix

We provide, as supplementary material, additional information about our dataset, annotation guidelines, preprocessing details, and expanded results across all experiments.

# A Data

This section describes summary statistics of our data, as well as a view of the platform's user interface.

# A.1 Platform Interface

Users are able to interact with the platform through their browsers, native OS applications, and phone apps. Screenshots of a song's interface are shown in Figure 7, as is a view of the iOS application's commenting page for a song.

# A.2 Users

User age, gender, and region distributions (Figure 8) show that the majority of users are young men who hail from major metropolitan areas.
The top 30 regions that users hail from are, in descending order, Beijing $(4.71\%)$, Guangzhou $(4.22\%)$, Shanghai $(3.80\%)$, Chengdu $(3.47\%)$, Shenzhen $(2.58\%)$, Chongqing $(2.56\%)$, Nanjing $(2.51\%)$, Wuhan $(2.43\%)$, Hangzhou $(2.21\%)$, Changsha $(1.95\%)$, Xi'an $(1.91\%)$, Overseas-Other $(1.77\%)$, Zhengzhou $(1.54\%)$, Hefei $(1.30\%)$, Tianjin $(1.28\%)$, Suzhou $(1.22\%)$, Kunming $(1.20\%)$, Urumqi $(1.19\%)$, Jinan $(1.03\%)$, Fuzhou $(0.99\%)$, Qingdao $(0.91\%)$, Nanning $(0.89\%)$, Nanchang $(0.88\%)$, Shenyang $(0.83\%)$, Harbin $(0.83\%)$, Foshan $(0.77\%)$, Dongguan $(0.74\%)$, Guiyang $(0.74\%)$, Shijiazhuang $(0.73\%)$, and Ningbo $(0.68\%)$. General trends for user gender and region taken in the context of user ages mirror population trends indicated by the Chinese Census (National Bureau of Statistics of China, 2021).

# A.3 Songs

Song comment and comment token distributions are shown in Figure 9; lyric preprocessing and topic modeling details are in Appendix Section C.

# A.4 Playlists

Playlist comment distributions and comment token distributions are shown in Figure 10. The top 20 most popular tags used for playlists are, in descending order, Language-Western (21.5%), Style-Pop (19.2%), Language-Chinese (15.1%).
| Group | Valence (m./std.) | Arousal (m./std.) |
| --- | --- | --- |
| Women | 5.93/1.33 | 5.23/1.09 |
| Men | 5.72/1.38 | 5.14/1.13 |
| 10后 | 5.76/1.37 | 5.17/1.12 |
| 05后 | 5.79/1.40 | 5.21/1.10 |
| 00后 | 5.85/1.38 | 5.22/1.10 |
| 95后 | 5.80/1.36 | 5.17/1.11 |
| 90后 | 5.76/1.34 | 5.12/1.11 |
| 85后 | 5.74/1.36 | 5.13/1.12 |
| 80后 | 5.79/1.32 | 5.10/1.11 |
| 75后 | 5.75/1.32 | 5.08/1.10 |
| 70后 | 5.81/1.31 | 5.09/1.09 |
| 65后 | 5.81/1.33 | 5.13/1.11 |
| 60后 | 5.91/1.33 | 5.15/1.10 |
| 55后 | 5.81/1.36 | 5.18/1.11 |
| 50后 | 5.77/1.40 | 5.19/1.11 |
Table 2: Comment valence and arousal mean (m.) and standard deviations (std.) for demographic groups on gender and age.

Style-Electronic (13.4%), Emotion-Healing (7.66%), Theme-ACG (7.56%), Setting-Night (7.03%), Style-Soft (6.56%), Theme-Games (6.18%), Theme-Movies (6.10%), Emotion-Relaxing (5.87%), Emotion-Exciting (5.33%), Style-Rock (5.32%), Style-Rap (5.08%), Emotion-Nostalgic (4.91%), Emotion-Quiet (4.65%), Setting-Study (4.50%), Theme-Classics (4.11%), and Emotion-Sadness (4.09%). Note that as playlists may each contain at most three tags, summing the percentages for all tags exceeds 100%.

# A.5 Albums

Album comment, comment token, and release date distributions are shown in Figure 11. Songs with at least one comment are heavily skewed toward recently released music.

# A.6 Artists

A distribution of the number of albums and songs per artist is shown in Figure 12.

# A.7 Demographic Baselines

Users of different demographic groups have varying comment valence and arousal means and standard deviations. These statistics, stratified by demographic groups on gender and age, are shown in Table 2.

# B Emotion Annotation Guidelines

This section describes the annotation guidelines used by annotators in our pilot studies to determine, in top-level comments, (1) the emotion experiencer, i.e. who was the primary experiencer of the emotions expressed in the comment, and (2) the affective stimulus of the emotion expressed in the comment. Annotators consisted of two Chinese native speakers and were asked to annotate a set of 1000 randomly selected comments on the platform.

Annotators were first tasked with familiarizing themselves with the BRECVEMA framework of musically evoked emotions (Juslin, 2013) before being presented with the following questionnaires for annotation:

# Question 1: The Emotion Experiencer

Comment: 真特么的带感这曲子! ("This tune is damn catchy!")

Q. Who was the primary experiencer of the emotion expressed in the comment?

- The commenter themselves.
- Someone other than the commenter themselves.
- This comment possesses no emotional content.

# Question 2: The Affective Stimulus

Comment: 真特么的带感这曲子! ("This tune is damn catchy!")

Q. What was the primary affective stimulus of the emotion expressed in the comment?

- The song, album, or playlist.
- Something other than the song, album, or playlist.
- This comment possesses no emotional content.

As stated, annotators were asked to resolve initial annotation disagreements through discussion in order to arrive at a set of annotations that both agreed on.

# C Lyric Topics and Preprocessing

This section describes our lyrical preprocessing methods and 20-topic LDA model results on song lyrics.

Preprocessing. We first identify instrumental music by matching lyric data on the substring 纯音乐 ("instrumental music"), used by the platform to denote songs of this category. For non-instrumental pieces, we filter out lines with song metadata (e.g. composers) by removing lines that match the following regex:

:|||《|》|produced by|vocals by|recorded by|edited by|mixed by|mastered by|-

Repeated lyrics are denoted with overlapping timestamps; e.g.,

[1:00.00][2:00.00] 雨淋湿了天空

indicates that the line 雨淋湿了天空 ("the rain soaked the sky") is repeated at minutes 1 and 2. We therefore unfurl and reorder lines by timestamp, duplicating lines when necessary. Further tokenization details are given in Appendix Section D (Text Preprocessing).

Topic Modeling. We train a 20-topic LDA model on preprocessed song lyrics and manually label each topic with its prominent theme. While some degree of variation exists for listener affective responses across songs of each topic, these topic distributions are primarily used as lyrical content controls in our regression models. Labeled topics and their top words are shown in Table 3.

# D Text Preprocessing

This section describes our text preprocessing pipeline for all text data on the platform, namely (1) lyrics and (2) listener comments.

Preprocessing.
We analyze only Chinese language content, using Google's Compact Language Detector v3 (gcld3$^{9}$) to detect text language and keep only Chinese language texts. We then convert all traditional Chinese characters to their simplified forms using hanziconv$^{10}$ to ensure consistency in our experiments, e.g. when calculating LIWC scores, for which we use the simplified Chinese version (Huang et al., 2012). We finally tokenize with jieba.$^{11}$

Filtering for Affective Content. Following our annotations of listener comments, we filter out all comments that match the following regular expressions (Table 4), aiming to increase the precision with which the remaining comments indicate an affective response. These filters generally match easily identifiable spam messages, e.g. "first comment" claims, album images, and quotations.

沙发,第一,第二,第三,第四,第五,第六,第七,第八,第九,第十,第1,第2,第3,第4,第5,第6,第7,第8,第9,一楼,留名,封面,没人,来晚了,板凳,求,前排,识曲,后排一条,好少,不火,助攻,作者,评论,人呢来了,\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*

Table 4: Regular expressions used to filter out irrelevant listener comments that do not indicate an affective response.

# E Expanded Results for Variations in Affective Response

Expanded results for variations in affective responses are shown in Figures 13 and 14 for musical features, Figures 15–18 for LIWC lyrical features, Figures 19–23 for settings and other playlist tags, and Figures 24–26 for demographic effects of gender and age.

# F Expanded Results for Disclosures of Mental Health Disorders

Regular Expressions. We source mental health disorders from the DSM-5-TR (American Psychiatric Association, 2022) and construct regular expressions to identify comments that may self-disclose a diagnosis of a mental health disorder; the specific regular expressions are shown below in Table 5.
These regular expressions return 2319 matched comments in total.

Manual Filtering. Two Chinese native speakers then manually screened out comments that lacked a clear statement of self-disclosure. These primarily consisted of comments that (1) described other people's diagnoses, i.e. those of relatives, friends, or celebrities, (2) described recovery from a disorder, (3) were speculative about the diagnosis, e.g. "我觉得..." ("I think..."), (4) described what a diagnosis was without indicating that the listener themselves was diagnosed, or (5) used diagnosis terms only to exaggerate sentiment. Examples of positive and false positive comments are shown below in Table 6. Following manual annotation, $46.9\%$ of regular expression-matched comments were eliminated, leaving 1231 comments deemed as self-disclosures.

精神分裂, 精分, 人格分裂, 妄想, 思觉失调, 强迫症, 创伤后应激, 情感障碍, 情绪障碍, 情绪失调, 躁狂, 狂躁, 躁郁, 双向, 双相, 抑郁, 忧郁, 重郁, 轻郁, 焦虑, 社交焦虑, 社焦, 社交恐惧, 人群恐惧, 社恐, 余恐, 恐惧, 恐慌, 广场恐惧, 分离焦虑, 缺默, 人格障碍, 躯体变形, 体象障碍, 适应障碍, 多重人格, 人格解体, 现实解体, 创伤性失忆, 解离性身份障碍, 躯体症状, 歇斯底里, 转换症, 转换障碍, 做作性障碍, 装病候群, 代理性孟乔森, 进食障碍, 摄食障碍, 神经性饮食失调, 反刍, 厌食, 贪食, 暴食, 暴饮暴食, 异食, 失眠, 睡眠障碍, 嗜睡, 睡眠相位后移, 快动眼睡眠, 睡瘫, 睡眠瘫痪, 梦魇症, 易怒症, 暴怒症, 行为障碍, 品行障碍, 偏执, 边缘性人格, 边缘型人格, 边缘性格, 做作.*人格, 自恋.*人格, 回避.*人格, 依赖.*人格, 精神病, 心理疾病

Table 5: Mental health disorder condition name strings; these are prefixed/suffixed with the strings for "diagnosed" ("确诊.*") and "diagnosis" ("诊断.*"), i.e. "确诊抑郁" for "diagnosed with depression", to act as initial regular expression filters for users who self-disclose a diagnosis of a mental health disorder.
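The prefix/suffix construction described for Table 5 can be sketched as a small regex pre-filter. This is an illustrative sketch, not the authors' exact implementation: `CONDITIONS` holds only a small subset of the Table 5 terms, and `matches_disclosure` is a hypothetical helper name.

```python
import re

# Illustrative subset of the condition-name strings in Table 5
# (the full list is much longer).
CONDITIONS = ["抑郁", "焦虑", "双相", "躁郁", "强迫症", "精神分裂"]

# Each condition name is combined with the strings for "diagnosed" (确诊)
# and "diagnosis" (诊断) as a prefix or suffix, e.g. 确诊抑郁 for
# "diagnosed with depression".
PATTERN = re.compile(
    "|".join(f"(?:确诊|诊断).*{c}|{c}.*(?:确诊|诊断)" for c in CONDITIONS)
)

def matches_disclosure(comment: str) -> bool:
    """Coarse regex pre-filter; matched comments still require manual
    screening to drop speculative or third-person statements (Table 6)."""
    return PATTERN.search(comment) is not None
```

A comment such as 今天,我被确诊为抑郁症了 ("Today, I was diagnosed with depression") would pass this pre-filter; manual filtering then removes false positives of the five kinds listed above.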
| Positives | False Positives |
| --- | --- |
| 今天,我被确诊为抑郁症了。 去了大城市诊断的双相情感障碍 524双子,确诊精分。诊断人格分裂。即使这样也要活下去啊! 2017.12.25确诊焦虑症 中度抑郁我很痛苦 | 医院等待医生上班确诊我是否患有抑郁症 整的我有点精神分裂症 自我诊断上瘾了 确诊中度抑郁症,现在已经走出来啦 我已经几个朋友确诊抑郁了。 我认知是抑郁症心境低落与其处境不相符 |
Table 6: Examples of positive and false positive self-disclosure statements of mental health disorders encountered in our manual labeling of comments matched with regular expressions. Partial comments are shown.

Disorder Matches. In total, 1133 users made self-disclosure statements. A breakdown of users by disorder class is shown in Table 7; note that the user counts summed across all classes exceed 1133 due to comorbidities.
| Disclosed Disorder Class | Matched Users |
| --- | --- |
| Depressive | 920 |
| Anxiety | 225 |
| Bipolar and Related | 201 |
| Schizophrenia Spectrum and Other Psychotic | 108 |
| Sleep-Wake | 35 |
| Personality | 18 |
| Feeding and Eating | 11 |
| Obsessive-Compulsive and Related | 5 |
| Somatic Symptom and Related | 4 |
| Dissociative | 1 |
| Trauma- and Stressor-Related | 1 |
Table 7: The number of users who self-disclose a mental health disorder, stratified over broad disorder classes.

**Diurnal User Activity.** Stratifying user activity across hours and measuring the relative comments made per stratum, we observe that disclosers show greater platform activity in the early morning (1–5 AM) and around 11 AM–5 PM compared to the set of all users. As shown in Figure 5, these observations are consistent with insomnia-aligned diurnal user activity, prevalent in individuals diagnosed with affective disorders (Taylor et al., 2005; Harvey, 2008). Note that due to platform data limitations, while comment dates are available for all comments on the platform, only comments made in the past year have recorded times; these are therefore what we use in our analysis of diurnal user activity. It is thus important to interpret these results in the context of the COVID-19 pandemic, which has caused an increase in the prevalence of anxiety and depression worldwide (Bareea et al., 2021).

![](images/d2f13ff52bb81a1715ff54845254c2d9b63816b9363b8c19b8c7781f64025435.jpg)
Figure 5: Diurnal commenting activity between disclosers and the set of all users.

**Playlist Engagement.** Relative tagged playlist engagements are shown in Figure 6 for disclosers and the set of all users; these are expanded figures as noted in Section 5 of the main paper. Notably, disclosers show greater engagement with emotion-tagged playlists; within emotional and setting tags, disclosers show overwhelmingly greater engagement with tagged playlists of a sadder nature, i.e. loneliness $(+302\%)$, sadness $(+158\%)$, and night $(+50.1\%)$, as well as decreased engagement with playlists of a more active nature, e.g. exercise $(-51.7\%)$.
These observations mirror affective disorder activity trends (Cooney et al., 2013) and suggest that people with affective disorders are more likely to use music reflective of negative emotions than positive emotions to manage feelings of sadness and depression (Stewart et al., 2019).

![](images/6c59e2b34ed37bc6d1f6cd2ca6c606423b4a9579321be7fa71c945d9aec71ca6.jpg)

![](images/4a6a97d0147745083dd07b4c1c75278954f07ce812ee3d6f9d0ec0da650e7ed8.jpg)

![](images/cc8680f6dc1aa83a78ae0146895163b7a160b63dffe2cfb38ee7e5b66cd8c867.jpg)

![](images/8f4a23d78ab682d8f2dea5bb35fb83b0d8c405ec55afc76f50637df342dee876.jpg)

![](images/21a862735e57d944f0dae3cfd8a5a2cb9c9e619b15af789b79bbeefa33932ecd.jpg)
Figure 6: Relative tagged playlist commenting activity between disclosers and the set of all users. A breakdown of engagement with the five broad tag categories is shown on the top left, while the other figures show each category's relative tag engagements. Note that as each playlist may have up to three unique tags, relative tag percentages do not add up to $100\%$.

![](images/7cfcf252e8eb32337ca79fdc04109beaee7838e5d25436be3c2aa1af8d490048.jpg)

![](images/e3b26bb268a892cc926dbac5a972f24e6a39893e2847820cda1c19add68ca27f.jpg)
Figure 7: Screenshots of the platform's in-browser web page interface (left), showing the description, lyrics, and comment board of a song, and iOS in-app interface (right), showing the comment board of a song.

![](images/1201983f2b25b4f22e2b90908c7718477bcffa55aefb00e5c428c5d8940f2255.jpg)

![](images/e9d520f108d5a408146b646b4df7778bd4e85b87f7f474c80f255b854194a2df.jpg)
Figure 8: User age distributions (left) according to the decade of birth (i.e. 00后 for those born in the 2000s), and user gender distributions (right) across platform-available choices for gender. Here, NA implies that the user did not provide gender information during registration.
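The hour-of-day stratification described under "Diurnal User Activity" above can be sketched as follows. The timestamps and group memberships below are hypothetical placeholders, and `relative_hourly_activity` is a helper name of our own choosing rather than anything from the paper's codebase.

```python
from collections import Counter

def relative_hourly_activity(comment_hours):
    """Fraction of a group's comments that fall in each hour stratum (0-23)."""
    counts = Counter(comment_hours)
    total = len(comment_hours)
    return {h: counts.get(h, 0) / total for h in range(24)}

# Hypothetical comment hours for disclosers vs. the set of all users.
discloser_hours = [1, 2, 3, 3, 13, 14, 15, 23]
all_user_hours = [9, 10, 12, 13, 19, 20, 21, 22]

disclosers = relative_hourly_activity(discloser_hours)
everyone = relative_hourly_activity(all_user_hours)

# Hours in which disclosers are relatively more active than the population,
# analogous to the comparison plotted in Figure 5.
over_represented = [h for h in range(24) if disclosers[h] > everyone[h]]
```

On these toy inputs the early-morning hours 1–3 surface as over-represented for disclosers, which is the shape of the comparison Figure 5 makes on the real data.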
![](images/8854bf67c52a5e838f228ca1ef85dd92e3a097f9c29b1641adc73a43885cb830.jpg)

![](images/f2c81c3b0501894396e3851ffe179505dd8ebb00f334cb7d22c094d05eb96d3f.jpg)
Figure 9: Comment (left) and comment token (right) distributions across all songs with at least one comment.

![](images/be17c4b91b1b44d353c0d9c11c5d9797eec4aebcbd0acef70559b85b71e6b62c.jpg)
| Topic Theme | Top Tokens |
| --- | --- |
| 0. Romance/Sentiment (爱情/感性/伤感) | 爱,未,今天,便,似,一生,令,心,没,心中,里,太,愿,仍然,想,没法,一起,讲,吻,快乐 |
| 1. Youth/Hope/Warmth (青春/希望/阳光) | 梦想,希望,地方,梦,世界,身旁,远方,青春,路,模样,时光,方向,未来,流浪,勇敢,阳光,带,温暖,生命,心中 |
| 2. Transcendental (人生/社会/超俗) | 人间,江湖,天地,皆,天下,少年,山河,剑,生死,笑,问,间,世间,道,万里,便,江山,英雄,合,此生 |
| 3. Hometown/Childhood (故乡/童年) | 花,家,牵挂,长大,噢,回家,记得,回答,说话,画,天涯,挣扎,走,呐,害怕,变化,落下,傻,年华,故事 |
| 4. Friendship/Hedonism (享受/欲望/世俗) | 吃,不要,没,兄弟,快,音乐,钱,没有,起来,新,听,带,买,今天,走,站,玩,现在,喝,说唱 |
| 5. Love/Lust (恋爱/情欲) | 喔,女,男,合,阮,喝,一杯,一半,讲,耶,人生,爱,伊,酒,甲,拢,唤呀,啊啊啊,心爱,搁 |
| 6. Memories/Regret (从前/失望) | 没有,想,没,不会,知道,现在,不想,已经,生活,里,太,真的,想要,时间,总是,听,曾经,其实,不能,一直 |
| 7. Nature/Spring (阳光/故乡/自然/草原) | 唱,美丽,姑娘,飞,月亮,草原,歌,吹,故乡,春天,开,歌声,轻轻,歌唱,亲爱,太阳,一片,花儿,远方,阳光 |
| 8. Breakups/Sadness (分手/情感/失恋) | 走,没有,爱,手,寂寞,温柔,快乐,不要,懂,回头,以后,梦,朋友,难过,自由,不会,最后,记得,沉默,拥有 |
| 9. Nostalgia/Melancholy (伤感/忧愁/思念/思乡) | 相思,一曲,醉,听,落,岁月,梦,红尘,明月,桃花,笑,人间,花,叹,不见,故人,春风,似,间,清风,见 |
| 10. Heartbreak/Loneliness (爱情/失恋/孤独/伤心) | 爱,心,爱情,眼泪,哭,太,寂寞,不要,泪,越,女人,心碎,恨,伤,美,幸福,想,错,后悔,不会 |
| 11. Wistful/Sentimental (思念/孤独) | 梦,一生,情,心,愿,心中,今生,难,梦里,往事,雨,问,岁月,泪,匆匆,人生,如今,相逢,相思,风雨 |
| 12. Family/Longing (家庭/思念) | 妈妈,唔,哒,想,喵,好想你,爸爸,宝贝,话,滴,摇,快,系,滴答,讲,一只,嗯,笑 |
| 13. Celestial/Awe (孤独/渺小) | 里,风,天空,故事,听,记忆,温柔,城市,雨,回忆,梦,黑夜,时光,相遇,声音,风景,夜空,梦境,流星 |
| 14. Love (爱情) | 爱,想,爱情,心,忘记,没有,离开,永远,等待,明白,回忆,不会,未来,不要,我爱你,相信,一起,不能,愿意,一次 |
| 15. Countryside/Family (乡村/山水) | 呦,哥哥,嗨,里,妹妹,哥,走,转,白,长,嘞,耀,红,山,开,水,见,笑,送 |
| 16. Blossoming/Joy (爱情/幸福) | 想,一起,喜欢,陪,爱,知道,想要,笑,世界,微笑,拥抱,感觉,慢慢,眼睛,听,心,心里,我要,带,幸福 |
| 17. Nationalism/China (爱国) | 中国,恭喜,新,菩萨,熘,南无,祖国,祝,北京,人民,英雄,来来来,新年,吼,平安,东方,历史,阿弥陀佛,祝福,菩提 |
| 18. Time/Nihilism (时间/转瞬即逝) | 一天,时间,再见,身边,永远,脸,世界,改变,思念,从前,想念,明天,远,出现,看见,回忆,昨天,一点,一遍,一年 |
| 19. Being/Existential (存在/生命的意义) | 世界,无法,灵魂,现实,需要,不断,成为,黑暗,继续,命运,生命,身体,内心,像是,保持,自我,有人,每个,孤独,自由 |
Table 3: Manually labeled lyrical topics and their top tokens, as captured from a 20-topic LDA model trained on preprocessed song lyrics.

![](images/ec62c88ae1c6be4b9d9d47aa2486ba955cf225f243f66b58ec0f80a70e689ef3.jpg)
Figure 10: Comment (left) and comment token (right) distributions across all playlists with at least one comment.

![](images/a27f149ccb18ec03343b055c88fae1739ad5419c99eb31aed07a534603197fb3.jpg)

![](images/0cf9f0c3db62231e5b985236ef2046e6eeb4a352b7bcd5008868f008aeca3eea.jpg)
Figure 11: Comment (left) and comment token (middle) distributions across all albums with at least one comment, as well as album release date distributions (right).

![](images/1092f294176ff92a74d627eb8bc8ffe4faabccb616cb500ff47bffae359168c5.jpg)

![](images/d1a14f556e6bd9baaa9680b382d377d4a85bb11404b03d23470ae0a8a0345d58.jpg)

![](images/19ceb93ab92472071d4bc1554c76b1df4f49a8ece8de0e8947eb99126647a26c.jpg)
Figure 12: Song (left) and album (right) distributions per artist across all artists. Platform-listed artists with the highest number of songs and albums are generic compilations of multiple artists, e.g. "华语群星" ("Chinese stars").
+ +![](images/238101b03c9b65d29314c3f4c734f88f39a299ff85b4b41386a4732608ee5748.jpg) + +![](images/3ee6a6de1d2ed4963e09f06cb61513d937a8d828051ea769d2f5100300ea2f4a.jpg) + +![](images/1f3fdd8e0d3d5b09caf36431803fd076728476a3fa12d1b2caaab37286cb686a.jpg) + +![](images/4bc9279ab56e403afa5879d6d17ba2bcaa722e15f4d321a6314f6328c804a7fa.jpg) + +![](images/0979eb426f7acf1bcdb66a38fa047565dd4ac88c4bdbd1c285a3712b224b25ea.jpg) + +![](images/9e654fa7ef8d43048914f4d87ef45c1aa5960326046293ae12ba6c4ba1eceff9.jpg) + +![](images/4175135e7244d40cefbefdbc743514474a7d8deaea502e853f755f30de549925.jpg) + +![](images/c05bbc923706d5a7e7353e1dc14d9aeabaa0877c3460614ee9691a02726eb6e1.jpg) + +![](images/03da8fc96afa023c67cec08325a4f867551930f83e72fad35f2c06c06fce05e9.jpg) + +![](images/7d1718a753c667f224018cada4c50e4e0743b65c74339a85880870decceb2335.jpg) + +![](images/33d6faf671f7448641901e9b5549be70d3c2437e019a402d10623223bfbf469d.jpg) +Figure 13: Average marginal effects of musical features on listener affective responses, controlling for lyrical features and listener demographics. Standard errors are shown; valence in red, arousal in blue. + +![](images/eec36f2ecbb5bffab22acdc8110386292b350a8b6e334610ec89e431beb60b31.jpg) + +![](images/e9c17bd77b500b96dae58d91822409d8b2fa3c797a110a3d98fb4b3f99c0c103.jpg) + +![](images/652ecc151fa3cd9dca8b9034f81f30d1dd6c7542de06c198297ac0917613859a.jpg) +Figure 14: Raw valence and arousal scores for variations in listener affective responses with respect to key. Across all keys, valence response to major mode keys is consistently higher than that of their corresponding minor mode key, while the opposite relationship exists for arousal response. Standard errors are shown; valence in red, arousal in blue. 
+ +![](images/71dae63dece6698d473ce350ef6bfb051ace6284ff1fe4ff0b3db84de33a99e6.jpg) + +![](images/be97fc7624f32c6cde9b1a0ce58a8eb272427ff1192a2a57c1e4f564b8f5807a.jpg) + +![](images/1654485185c495072c1170b704535c21b2f0e19a2257e21d03eb981053753201.jpg) + +![](images/39e2cbb62ba6d4786bcff8c3603182cf211c59dfce1052e287251577238f4843.jpg) + +![](images/2d580de7810933a184d2536e82a750a7376a90a9857c5d2bfd6f4f4058c833e3.jpg) + +![](images/b12f40c188f61af2143b53ac8039f76b723f0870eb14c718cd222bfd52b94971.jpg) + +![](images/51fbfbbc0e0a05d12b9b6ddfa60737987298608ea9f9357ef47349d6773d2889.jpg) + +![](images/7fc7ebeb0af97133d57d33ce8f8eb795ecca66fc1b7641f5ead776556d01897a.jpg) + +![](images/cf4b3b059cfef145e9cfc6de5447841c2f6be78ea49d088a7ff4c8678916d560.jpg) + +![](images/bc7abb923ef454bf78302b3ebfe2d61d70e66f27331dced2c1bf28583721d17c.jpg) + +![](images/efaf34f64bbcda34d254b52d90f0b3ed6db274fc93cb2150007bb80cf18b37cf.jpg) + +![](images/64cc7c53309fd3b30fc33b6363f1e453ba9f11fb78bf6033b27f65a6828934a5.jpg) + +![](images/1880c21b18800dbf14f10cf36842cd8928b01da28723b1f65a372378ad549457.jpg) + +![](images/a7da1833f721fc549991f6b00258c8e8c24c26013fb5b5a1e0ab1563188c4da8.jpg) + +![](images/4c37f5c0b901df648af769cbd3eea735a34411abfcad6fa3c4c18846d1b3f06b.jpg) + +![](images/f9bae1f573ea4679d5e15b88ac66676b1f55758edbbfc784a486432df95ae29a.jpg) + +![](images/fbcc906f5662923b58c18fd5fb6ff1736fecdbdc748d9c755f81f7057b31d1b3.jpg) + +![](images/70da0e5849fc03de84b477c5befa012a020e9bd170749d6b069bf7ddc47ece8b.jpg) + +![](images/20305095a37e0d20d7dfae577b30bf4abb966abc887ef0a8d107c68271dde3bf.jpg) +Figure 15: Average marginal effects of LIWC psycholinguistic lexical category lyrical features on listener affective responses, controlling for musical features and listener demographics. With the intent to reduce noise at the extremities, x-axis limits are capped at their $95\%$ quantile values. Arranged in alphabetical order, standard errors are shown; valence in red, arousal in blue (Part 1/4). 
+ +![](images/75b324a05a30f96883576b0d5d084a7445c51dc27669e43e7589cb88813a915b.jpg) + +![](images/c9daa76bd38a8376b800f3979ac4159224a573a166a2d845c41a21bffd5822ec.jpg) + +![](images/8487818cde4a12d6411b5c7386a60255b95cc34aa52108ca7579cec7f2b57ade.jpg) + +![](images/007e377ec60bee9f413093b7a86c63bc772b9847e1c9b74d18358918692b2d91.jpg) + +![](images/8b4950a56fbec2040cedf2ee7c1e6ffcd140c986632cb80f3e8303e94ee2b212.jpg) + +![](images/1a7dafee3e8c09e4228708806d604b32e3349ec7ffb0c0150bc231199c5cf4a9.jpg) + +![](images/876b5d441dbeb9f5e16aed3e59cb65901315c0ec995dd2880ab146f83ae64a00.jpg) + +![](images/fa8c7e3e5108b9fa6b5e5c9a4d5ac9b613adf2dae0d4cfeac084c21a13e565fb.jpg) + +![](images/2b73528ff0578db9715228bffd33211e854bca4910b64bb6ebba54bf26d85a99.jpg) + +![](images/8b88d8d84806b5b0ea10400adef9caa52f9d64d8dd131af37276aae81ae09bc8.jpg) + +![](images/666393483753b78f4b077ec5c52948afda1d73b11b0e2d2f6b6dd481d0eb62b2.jpg) + +![](images/25e111a7be4b88a3194fef6378b892cafd9d654fbe252caeea86c7d92360b0d5.jpg) + +![](images/6e2b5eb54ac113c5895689169cb52bef4f4ca6d93dfada7bf9aa586d3ea137ec.jpg) + +![](images/c8322ea20f091a2337d1c48f701f049f29372f1bbbf2303328f41d7dd330b2a3.jpg) + +![](images/bb3c409fb4e4e6b379225aebd2a5e2b34420efa9fdf85c1fab66b3129ea48630.jpg) + +![](images/4976c1f8bf56b09caa1709b6383588fcf34fabb05b8482cbfabc5344e0717f4c.jpg) + +![](images/ad0b9d0e858f0c5dfba9aea7f2874eb63f3ea443257447a13ed3b42f5b3440da.jpg) + +![](images/feedb9f8c8f5f73ecab5c58b27565022413322ff88f23c3d54073b78769b89ad.jpg) + +![](images/bf046a5bd794178b677d95120a765b700524dae36ef2c1f259a53e1278b0e84b.jpg) + +![](images/bdbb7729bbd8717deed803060615b95afa21c10db7f55dcd8375c2ebbe6e4367.jpg) + +![](images/723a3e1f1a2d83b88489200a5eed2011ecccc9ef3d27db2a92a88a6fed7a7c68.jpg) +Figure 16: Average marginal effects of LIWC psycholinguistic lexical category lyrical features on listener affective responses, controlling for musical features and listener demographics. 
With the intent to reduce noise at the extremities, x-axis limits are capped at their $95\%$ quantile values. Arranged in alphabetical order, standard errors are shown; valence in red, arousal in blue (Part 2/4). + +![](images/6ed8c3e54ae7f2b07df37d22a9da6d71502409c85d00712805869a0a41af1c52.jpg) + +![](images/bd7afdf6c4f72518492aedc02e3aebcef0b10b662ff2b5799334cd47668e9794.jpg) + +![](images/c386174947fe6dd10dfbb95e2a9e455140f2d086b72a088a6e34691b172d4e43.jpg) + +![](images/ea1cb03ca2b4ac1c129adf22cc80bbad7f32b561ba3cda513857e2a0d69c1322.jpg) + +![](images/402c348e74735ad36e982fc1d22edb6c2fe96acfaf0afc1a28feb44bd358d846.jpg) + +![](images/0aeeb27a93c6b27599778c3347b2bc8970b2e33d26be7f24baf47b0ec986e2bd.jpg) + +![](images/214475e6c44b16072f3530b7e29f1006ba0206ff2eec80134381441b4a67efe8.jpg) + +![](images/a3bed49df44f555896050d68d7d6b99876447cc7066c420a9a39b19cbca0ebc3.jpg) + +![](images/61acbaaf86b4b2fb1f6fef2a14d1f05c9cceb0b3a1baf70e32b86a2f607d9c53.jpg) + +![](images/cf2b5e73720cf460f9d244756e870cabc80ceec3860a53ad29af6d44f9ec1e19.jpg) + +![](images/6853e72cd04ad39d810e6543904a3c779c66d72b64ffd400f3f34a99f950ac64.jpg) + +![](images/3bf309988024449ac0b45d086f1d70c32ed793d221afb19cd66bf3673dbff03e.jpg) + +![](images/9288fe43ee45a2dfb2c636e3d6301a73facfed8c8d4a40fc58f6dbb6e702b11f.jpg) + +![](images/fd341d3bd8ff4f1d864321b0dd4f042185c6ce785644c59177d0750257ecc3af.jpg) + +![](images/0c26747f130e796e9a261d0177c7dc465d056f5db2950b0863deeb9d935b5a04.jpg) + +![](images/156dc244f3c60874874d4b1082f88e32aff382f09fd91dc5e1a18be5f1d78a76.jpg) + +![](images/54d68cfb14c3d9d61c20b5f8bc043c740f451692e5e05c7f4c398eaa149c425e.jpg) + +![](images/c074ad897b764742fdd58a1ab440f2625a28d26c249c10196a8126afe96e14e5.jpg) + +![](images/adada477586d520fa386ae2db7856b8b96bb54e8b90bfcca1ed8bd888b18f2af.jpg) + +![](images/a954fbfe104d8fc6478a23b5e09b4d932129bdbbb13f1597f8cf2a1ef0f4a5a5.jpg) + +![](images/fef3a2f62053781fec5462c0d9123128cdaf3aea07f94a23d621c16f4c91e826.jpg) +Figure 17: Average 
marginal effects of LIWC psycholinguistic lexical category lyrical features on listener affective responses, controlling for musical features and listener demographics. With the intent to reduce noise at the extremities, x-axis limits are capped at their $95\%$ quantile values. Arranged in alphabetical order, standard errors are shown; valence in red, arousal in blue (Part 3/4). + +![](images/edb3ce345849337bc2551d1a44030c1c148138e0cd1f908e26faf40f815170fd.jpg) + +![](images/bbc0b1776425b76ff1133a18e05be3dc914dbe05aa6661fb898bd7478ee61f8f.jpg) + +![](images/b2bb3d937a1998861639e1216e2056b8679b732859fc04563598aec668a573a4.jpg) + +![](images/a809346c0068a54d3a77cc2cf1528509f26e5b091adf0cfa277722700288115a.jpg) + +![](images/13d36fe845c74493a92737ec73f009ffcf364b0963c946655dcee14ff4bcabfb.jpg) + +![](images/e48dc0870fd9409f8574e11c94ebf6058266b0f1700e0f71c684dece26a147dc.jpg) + +![](images/d9eb7f94425f85bda0278fa7d72abf56579996a7cff3bab7fd11514356719503.jpg) + +![](images/3d965032556f9ef7dca945ac78168c184e5f48e781c5fffea27cd69c3137b521.jpg) + +![](images/908d7b3302c8f0f416a0e72360f87342234dc3f3f43e988c331d757027531c73.jpg) + +![](images/f7dbb1ebe1c4f16149c1489bfe7b83d1a8a136974d84fbabf5890f504781f36d.jpg) + +![](images/d143a06e442c64e99a4f40118488415a31f4808b27b841bae9c999aa0c3f95ae.jpg) + +![](images/75b0bd079a821db71669847a084b1f3b367a7adf97eb794e622d3e4642b756d5.jpg) + +![](images/0a5d1eff3fd4eb4e5b7e0a729127fb50786fc3f66bd9b0dc1e55164c20629d5a.jpg) + +![](images/2287dc357c07f3b10f195f2a628852417986170520f73faa1744742a68b5aa5b.jpg) + +![](images/a72d3fb98e87f9827630028c5b823f9336f30095d3626bdc7239eb6a2e773548.jpg) +Figure 18: Average marginal effects of LIWC psycholinguistic lexical category lyrical features on listener affective responses, controlling for musical features and listener demographics. With the intent to reduce noise at the extremities, x-axis limits are capped at their $95\%$ quantile values. 
Arranged in alphabetical order, standard errors are shown; valence in red, arousal in blue (Part 4/4). + +![](images/a83d9583688cd156954c40f0d8e04a5b65ce70efacf274e23a279a2f85cc4026.jpg) + +![](images/54a46815282897f41af392749e4163b3df4f2177d4c8782a8843a426766f3d6d.jpg) +Figure 19: Average marginal effects of listening contexts in setting-tagged playlists on listener affective responses, controlling for songs and user demographic variables; standard errors are shown. + +![](images/37c9c9e74ee706b66ed521392f2ba273800c54100d7770bbe15f13f85ee4f0cc.jpg) + +![](images/b5bd70d3fdbfc9db6627dba128e5775d965e4aa85a55746a4190e48f8887c4db.jpg) +Figure 20: Average marginal effects of listening contexts in style-tagged playlists on listener affective responses, controlling for songs and user demographic variables; standard errors are shown. + +![](images/15e0d2643cc98e9376e22f0ea85e483d78884e919be691261ad3fa3a00eb81e1.jpg) + +![](images/0227b8df9dc5427408e7cac977c5bc9d6e2373c3805cdffdf2761f4a81755aa5.jpg) +Figure 21: Average marginal effects of listening contexts in emotion-tagged playlists on listener affective responses, controlling for songs and user demographic variables; standard errors are shown. + +![](images/79bdb044b0217b4259b5c3bdebb93793b4ca563c394f1778093a7d33e8bc21bb.jpg) + +![](images/6405f24216941087a9bac4c08ccb32c7a7912aab03c675a187062bf9fc9ef703.jpg) +Figure 22: Average marginal effects of listening contexts in theme-tagged playlists on listener affective responses, controlling for songs and user demographic variables; standard errors are shown. + +![](images/f00da6c01a161a0826677d144cfd7695aed94bf7c2381bf2ee2cc054ea61dbd2.jpg) + +![](images/a73eae018fec4325b9451bb1d276c65202e075c957984bf3c4195d87bd966ef3.jpg) +Figure 23: Average marginal effects of listening contexts in language-tagged playlists on listener affective responses, controlling for songs and user demographic variables; standard errors are shown. 
+ +![](images/3e75af5cb5ea76620e56b5b28dd49d0c566a0adbc562364f3546871777f53c61.jpg) + +![](images/694395d93cb8957ff388f76a5979ff2899ece6679a2c037fb2613622cb1530d1.jpg) + +![](images/c210e1761bf26087db47780bd5159b88a9012e5f44d9cf4e951aff464f24e1ca.jpg) + +![](images/7b54c56857d44a307a7db590cbb4e6e6a7b6704ccab68be4610b42ed3f8ab5b9.jpg) + +![](images/230201f626744d146d6ce809358c8c42597d2534d433a4ad6bcb1f25569a9abd.jpg) + +![](images/bdf7d97180a0aac1e0d4ccb2199b44d1815fa30247f962f54ac52aa53d8a0a53.jpg) + +![](images/d842d7652082ee9c738ca25582ae41171acc14f6e35c23ec52cbc1b571e8678b.jpg) + +![](images/7f685c0d1e0c822464aa86b63173fc7a4ddfa08167df7d66bf12131f21bee49a.jpg) + +![](images/b05286a539fd154f69f80537053302bab5c3460053736ca7094b52c7fb786e71.jpg) + +![](images/c8c508414beae80f4ae16f30fcc03aa84a543323ca562638172d39a51bac0b73.jpg) + +![](images/860cbe399110fc8987f58e32caf78d54ff1b7f71ed1448744325667914e6af21.jpg) +Figure 24: Average treatment effects of listener gender on response valence and arousal relative to musical features. A positive ATE here indicates a larger percent increase in valence or arousal for men, and a negative ATE here indicates a larger percent increase in valence or arousal for women. Standard errors are shown; valence in red, arousal in blue. + +![](images/cf35265186e4891413bda5e03c9994b0125540a57faeb29d3344d8681583a424.jpg) + +![](images/d40c798826f6aeaadc2f286b823d9d0d99cd3e00db28c5e0144344913cc053e2.jpg) + +![](images/63bc36782bd6dd178444718214d2e3ae12c7df5578981b02c5c9cdb14d2592e3.jpg) +Figure 25: Average treatment effects of listener gender on response valence and arousal relative to lyrical features on LIWC affective processes. Observations show that men are more positively affected by greater posemo use, while women are more negatively affected by greater negemo use. With the intent to reduce noise at the extremities, x-axis limits are capped at their $95\%$ quantile values. Standard errors are shown; valence in red, arousal in blue. 
+ +![](images/822179a400bfb4e0e90832c405a00539cac8a6cd416f6d97827c5dcdc1ace7bf.jpg) + +![](images/0998fb83260fff4e69a1c40b8290d74c5a8da37cf2508bf0bcab1121150f1da6.jpg) + +![](images/8981cea4752290419cb232c4add92cbdf1eaa3c6abb4e8f2c505c64c39f38630.jpg) + +![](images/10443a34db4bed7c091b2b6762e2ccba8c5a90e27a2a2f0f285d1dbb66a7d942.jpg) + +![](images/1f448e4ebf5d0ff8298a450f054bf252716e2c8e4f5c6704a45b7b8739e28168.jpg) + +![](images/cdc752741ec0bb85c0e9c68ed89a9cf70a79da67fc2e569c27ce44069b7799b5.jpg) + +![](images/3a1020bc7bbbd4d1ea0117cb2c1abce79a2096caa6e95877e39c38004a2de980.jpg) + +![](images/bf3bbe144640c16953759d2c49bff3604924c87d2be62c777d05be1b474cd80e.jpg) + +![](images/2a3f6500c11fcd6b4c50c1f9962f0c8001bb768b100cded1498b10b5cedd64fb.jpg) + +![](images/aeee70cca3bedf0cbc2daf98f6415e4375365ff608b98525c60005d8b2a53492.jpg) + +![](images/d058734102cc1068db4999389beeca5b2d998e1597d5b77ac0813d54c10f07d6.jpg) + +![](images/93af314d15078eba387ce9b281385b06e4e196f5959b0476518dec08ccc45f48.jpg) +Figure 26: Average treatment effects of listener age on response valence and arousal relative to musical features. A positive ATE here indicates a larger percent increase in valence or arousal for those born in the (b.i.t.) 2000s, and a negative ATE here indicates a larger percent increase in valence or arousal for those b.i.t. 1990s. Standard errors are shown; valence in red, arousal in blue. 
+ +![](images/dd98366ae71111960ce0f53ada4e8da8b76582678f139ed08c4b1927c20fec7f.jpg) + +![](images/d49d28524964b4ee90af19e0daa5ffdd75d6fe3e7791315968fdaaa10aeeb2ee.jpg) \ No newline at end of file diff --git a/affectiveidiosyncraticresponsestomusic/images.zip b/affectiveidiosyncraticresponsestomusic/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..617258faceac0a0a4caa4e37f7119ee9473763df --- /dev/null +++ b/affectiveidiosyncraticresponsestomusic/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e881f89a65b0ad0dbc3a9a73160310a08bb6627a633578334ff73eb60ba7d2d8 +size 2806298 diff --git a/affectiveidiosyncraticresponsestomusic/layout.json b/affectiveidiosyncraticresponsestomusic/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0950712a7098b323a876d8f3fa4f0019bde92965 --- /dev/null +++ b/affectiveidiosyncraticresponsestomusic/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a0cdbb1563aa9cd8bc6abcfde6e2dae0756c579c5230656d66768fb6858bdeb +size 920671 diff --git a/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_content_list.json b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6f0a0f14ac9fc66441e6db057824252a5e7dd906 --- /dev/null +++ b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1192bef1fabb9c9f11725c50ac1bf0164f4c7ac240f4a48251896eef8a8af047 +size 79652 diff --git a/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_model.json 
b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c64229306f34d06527b47eae7fb79f7e346e7ba3 --- /dev/null +++ b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:98e40a7d187cf53aef546bc5b142d3082d4e73e30e93d5394027bc3d3b8e1df7 +size 100108 diff --git a/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_origin.pdf b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3d48ef300ff92795a2a6e6e89714797159f3b219 --- /dev/null +++ b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:431fc552c939cb182c542539c4976e974a53af656f229a3c6444180900313180 +size 973685 diff --git a/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/full.md b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1697703c59e887166455b2551e23422795e00296 --- /dev/null +++ b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/full.md @@ -0,0 +1,392 @@ +# Affective Knowledge Enhanced Multiple-Graph Fusion Networks for Aspect-based Sentiment Analysis + +Siyu Tang $^{1*}$ , Heyan Chai $^{1*}$ , Ziyi Yao $^{1}$ , Ye Ding $^{2\dagger}$ , Cuiyun Gao $^{1}$ , Binxing Fang $^{1,3}$ , Qing Liao $^{1,3\dagger}$ + +1 Harbin Institute of Technology, Shenzhen, China + 
+$^{2}$ Dongguan University of Technology, China + +$^{3}$ Peng Cheng Laboratory, Shenzhen, China + +{tangsiyu999, chaiyan, yaoziyi}@stu.hit.edu.cn, + +dingye@gut.edu.cn, gaocuiyun@hit.edu.cn, fangbx@cae.cn, liaqing@hit.edu.cn + +# Abstract + +Aspect-based sentiment analysis aims to identify the sentiment polarity of social media users toward different aspects. Most recent methods adopt an aspect-centric latent tree to connect aspects and their corresponding opinion words, assuming that this facilitates establishing the relationship between aspects and opinion words. However, these methods ignore the roles of syntax dependency relation labels and affective semantic information in determining the sentiment polarity, resulting in wrong predictions. In this paper, we propose a novel multi-graph fusion network (MGFN) based on a latent graph to leverage the richer syntax dependency relation label information and affective semantic information of words. Specifically, we construct a novel syntax-aware latent graph (SaLG) to fully leverage the syntax dependency relation label information to facilitate the learning of sentiment representations. Subsequently, a multi-graph fusion module is proposed to adaptively fuse semantic information from the surrounding contexts of aspects. Furthermore, we design an affective refinement strategy to guide the MGFN to capture significant affective clues. Extensive experiments on three datasets demonstrate that our MGFN model outperforms all state-of-the-art methods and verify the effectiveness of our model. + +# 1 Introduction + +Sentiment analysis has been a popular research subject in natural language processing. Aspect-based sentiment analysis (ABSA) (Birjali et al., 2021) is a fine-grained sentiment analysis task. For example, given the sentence "The menu is limited but the dishes are excellent", there are two aspects mentioned in the sentence, and the sentiment polarities of the aspects "menu" and "dishes" are negative and positive, respectively.
Generally, the ABSA task is formulated as predicting the polarity of a given sentence-aspect pair. The main challenge of ABSA is to precisely capture the relationship between the aspect and its corresponding opinion expressions. + +![](images/4c065192c117b4c260eaa1061821168b4c0042119b1f57aef9941b3007fd6d2c.jpg) +(a) The dependency parse tree from spaCy. + +![](images/956662264c0da22ab80a191adb95829435acea352fc598de05921f2311fef48e.jpg) +(b) The dependency tree derived by ACLT. +Figure 1: (a) Two similar sentences with aspect "Amy", each with its own dependency tree. (b) An example where the numbers on arcs denote the weight of the edge between the aspect word and its contextual words, derived from ACLT (Zhou et al., 2021). + +Many existing graph-based methods (Sun et al., 2019a; Zhao et al., 2020; Wang et al., 2020; Li et al., 2021b) have been devoted to obtaining promising performance on the ABSA task by constructing graph neural networks (GNNs) over dependency trees. They generally rely on off-the-shelf dependency parsers to generate the static syntactic relationships between words in a sentence, which is insufficient to adaptively search for the affective clues of aspects from the contexts. Recent efforts (Chen et al., 2020; Zhou et al., 2021) show that a latent graph derived from dynamic latent trees can adaptively capture the relationships between words in a sentence, leading to better performance in ABSA. + +Despite the promising progress made by latent graph based methods, they still suffer from two potential limitations: (1) They ignore the richer syntactic information contained in syntax dependency relation labels$^{1}$ (e.g., nsubj and dobj in Figure 1), leading models to make wrong predictions. We show examples in Figure 1 (a) where these two sentences are very similar and have the same aspect "Amy". Note that aspect "Amy" presents opposite sentiment polarities in these two sentences.
The main reason for the wrong predictions is that the same aspects may signal different sentiment polarities when they have different syntax dependency relation labels (nsubj and obj, in red) with opinion words. Therefore, it is important to model the syntax dependency relations between words and fuse them into the latent graph to improve the performance of the ABSA task. (2) They pay more attention to neighbor words of aspects, bringing extra difficulty in capturing the interaction between aspects and their corresponding long-distance opinion words. To illustrate this limitation, we give an example in Figure 1 (b) where the attention scores of every word are derived from the existing state-of-the-art latent graph method, ACLT (Zhou et al., 2021). Note that the attention value between the aspect "chicken" and its corresponding opinion word "appalled" is 0.14, which is much lower than that between the aspect and its neighbor words (e.g., 0.26 for "at", 0.17 for "the", etc.). This implies that the existing latent graph focuses excessively on the neighbor words of aspects, while ignoring affective semantic information of words. Such a limitation may prevent the model from accurately capturing the interaction between aspects and their corresponding opinion words, thus degrading performance. + +To address the aforementioned two limitations, in this paper, we propose a novel multi-graph fusion network (MGFN) based on a latent graph to leverage the richer syntax dependency relation label information and affective semantic information of words. Specifically, we construct a novel syntax-aware latent graph (SaLG) by integrating syntax dependency relation label information to facilitate the learning of sentiment representations in the ABSA task. Subsequently, we design a multi-graph fusion module to fuse the information of the syntax-aware latent graph and the semantic graph (SeG), so that the SaLG can leverage the semantic information to capture significant sentiment features.
In addition, we design a novel affective refinement strategy to guide the model to determine the significant affective clues from surrounding contexts, which can effectively enable the model to capture the interaction between aspect words and long-distance opinion words. + +Our contributions are highlighted as follows: + +- We propose a syntax-aware latent graph (SaLG) that leverages the syntax dependency relation label information to facilitate the learning of sentiment representations. +- A novel multi-graph fusion network (MGFN) is proposed by integrating the semantic information learned from the semantic graph (SeG) into the SaLG to capture more accurate sentiment representations. +- We also propose an affective refinement strategy to guide the MGFN model to pay more attention to the opinion expressions of aspect words. +- Experimental results illustrate that our MGFN model outperforms the state-of-the-art methods on the SemEval 2014 and Twitter datasets. + +# 2 Methodology + +In this section, we elaborate on the details of our proposed model. The overall framework of MGFN is shown in Figure 2. It contains four components: 1) the Text Encoding Module encodes the contextualized representations of the input sentence; 2) the Graph Construction Module constructs a novel syntax-aware latent graph (SaLG) and a semantic graph (SeG), respectively; 3) the Multi-Graph Fusion Module adaptively integrates semantic information from the SeG into the SaLG via an adaptive fusion gate; 4) the Affective Refinement Module introduces a novel affective refinement strategy to encourage MGFN to pay more attention to the opinion expressions of aspect words. + +# 2.1 Text Encoding Module + +Given an $n$ -word sentence $s = \{w_1, w_2, \dots, w_{\tau + 1}, \dots, w_{\tau + m}, \dots, w_n\}$ with the aspect $a = \{w_{\tau + 1}, \dots, w_{\tau + m}\}$ , we utilize the pre-trained language model BERT (Devlin et al., 2019) to obtain a contextualized representation for each word.
For the BERT encoder, we first construct a BERT-based sentence-aspect pair $\mathbf{x} = ([\mathrm{CLS}] s [\mathrm{SEP}] a [\mathrm{SEP}])$ as input. The output contextualized representation is $\pmb{H} = \mathrm{BERT}(\mathbf{x})$ , with $\pmb{H} = [h_1, h_2, \dots, h_n] \in \mathbb{R}^{n \times d}$ , where $d$ denotes the dimensionality of BERT embeddings and $\pmb{h}_i$ is the contextual representation of the $i$ -th word. + +![](images/45e70a96c9cd208c040d8958c67953ae102a0b3a55a84844c93757921b2d6e60.jpg) +Figure 2: The overall architecture of MGFN, which is composed primarily of four modules. + +# 2.2 Graph Construction Module + +# 2.2.1 Syntax-aware latent graph + +In order to capture syntax dependency relation label information, we construct a novel syntax-aware latent graph (SaLG) by implicitly labeling the edges with different dependency relations. + +We construct the dependency relation matrix $\mathbf{R} \in \mathbb{R}^{n \times n}$ from an off-the-shelf dependency parser to utilize the dependency relation label information. Each $r_{ij} \in \mathbf{R}$ represents the syntax dependency relation label between the $i$ -th and $j$ -th words: + +$$ +\boldsymbol{r}_{ij} = \left\{ \begin{array}{ll} \mathrm{deprel} & \text{if } \operatorname{link}(i, j) = 1 \\ 0 & \text{otherwise} \end{array} \right. \tag{1} +$$ + +where $\operatorname{link}(i,j)$ indicates that the $i$ -th and $j$ -th words have a dependency link, and deprel is the dependency relation label (e.g., nsubj, dobj).
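As a concrete illustration of Eq. (1), a few lines of Python build $\mathbf{R}$ from a toy list of dependency triples; the sentence and triples below are invented for the example rather than produced by a real parser.

```python
# Toy dependency triples (head_idx, dep_idx, label) for "The dishes are excellent".
# Invented for illustration; a real parse would come from an off-the-shelf parser.
triples = [(1, 0, "det"), (3, 1, "nsubj"), (3, 2, "cop")]
n = 4

# Eq. (1): r_ij = deprel if link(i, j) = 1, and 0 otherwise.
R = [[0] * n for _ in range(n)]
for head, dep, label in triples:
    R[head][dep] = label
```

Entries for unlinked word pairs stay 0, matching the "otherwise" branch of Eq. (1).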
A new dependency relation dictionary $\mathbf{V}^r$ is built based on the frequency of deprel in the corpus to encode dependency relations: + +$$ +\boldsymbol{V}^{r} = \left\{ \mathrm{deprel} : \operatorname{toId}\left(p\left(\mathrm{deprel}\right)\right) \right\} \tag{2} +$$ + +$$ +p(\mathrm{deprel}) = \frac{N(\mathrm{deprel})}{N} \tag{3} +$$ + +where $\operatorname{toId}(\cdot)$ maps each kind of deprel into a corresponding non-repeating integer ID according to its frequency calculated by $p$ . $N(\mathrm{deprel})$ is the number of occurrences of deprel, and $N$ is the total number over all kinds of deprel. By using the constructed $\mathbf{V}^r$ as a lookup table, each relation $r_{ij}$ can be embedded into a high-dimensional embedding vector $e_{ij} \in \mathbb{R}^{1 \times d_e}$ . Subsequently, the syntactic relation type-aware matrix $\tilde{\mathbf{A}} \in \mathbb{R}^{n \times n}$ is defined as: + +$$ +\tilde{\boldsymbol{A}}_{ij} = \operatorname{softmax}\left(\boldsymbol{W}^{a} \boldsymbol{e}_{ij} + \boldsymbol{b}^{a}\right) \tag{4} +$$ + +Utilizing $\tilde{A}$ as the initial edge weight matrix, the syntax-aware latent tree with $n$ nodes is derived by the tree inducer (Zhou et al., 2021), where each node is a word of the input sentence. Firstly, we define the variant of the Laplacian matrix $\widehat{L}$ of the syntax-aware latent tree, which further accounts for the dependencies headed by the root symbol: + +$$ +\widehat{\boldsymbol{L}}_{ij} = \left\{ \begin{array}{ll} \boldsymbol{\psi}_{i} + \sum_{i' = 1}^{n} \tilde{\boldsymbol{A}}_{i'j} & \text{if } i = j \\ -\tilde{\boldsymbol{A}}_{ij} & \text{otherwise} \end{array} \right. \tag{5} +$$ + +where $\psi_{i} = \exp (W^{r}\pmb{h}_{i} + \pmb{b}^{r})$ is the score of the $i$ -th node being selected as the structure root. $\widehat{\pmb{L}}$ can be used to simplify the calculation of the sum of weights.
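Under assumed random stand-ins for the edge weights $\tilde{A}$ and root scores $\psi$, the Laplacian variant of Eq. (5) can be sketched in a few lines of NumPy; its inverse supplies the $[\widehat{L}^{-1}]$ factors that the marginal computation reads off.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
A_tilde = rng.random((n, n))       # assumed edge weights, standing in for Eq. (4)
np.fill_diagonal(A_tilde, 0.0)     # no self-loops in the latent tree
psi = rng.random(n)                # assumed root scores psi_i

# Eq. (5): off-diagonal entries are -A_tilde[i, j];
# each diagonal entry j is psi_j plus the j-th column sum of A_tilde.
L_hat = -A_tilde.copy()
np.fill_diagonal(L_hat, psi + A_tilde.sum(axis=0))

# Inverting L_hat yields the [L_hat^{-1}] terms used by the marginals.
L_inv = np.linalg.inv(L_hat)
```

Because $\psi_i > 0$, each column of $\widehat{L}$ is strictly diagonally dominant, so the inverse exists.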
Subsequently, the marginal probability $A_{ij}^{SaLG}$ of the syntax-aware latent tree is calculated from $\widehat{L}_{ij}$ : + +$$ +\boldsymbol{A}_{ij}^{SaLG} = \left\{ \begin{array}{ll} \tilde{\boldsymbol{A}}_{ij} [\widehat{\boldsymbol{L}}^{-1}]_{jj} & i = 1 \text{ and } j \neq 1 \\ \tilde{\boldsymbol{A}}_{ij} [\widehat{\boldsymbol{L}}^{-1}]_{ji} & i \neq 1 \text{ and } j = 1 \\ \tilde{\boldsymbol{A}}_{ij} [\widehat{\boldsymbol{L}}^{-1}]_{jj} & i \neq 1 \text{ and } j \neq 1 \\ -\tilde{\boldsymbol{A}}_{ij} [\widehat{\boldsymbol{L}}^{-1}]_{ji} & i = 1 \text{ and } j = 1 \end{array} \right. \tag{6} +$$ + +where $A^{SaLG}$ can be seen as the weighted adjacency matrix of the SaLG, transformed from the syntax-aware latent tree. + +We adopt a root constraint strategy (Zhou et al., 2021) to keep the SaLG rooted at the aspect: + +$$ +\mathcal{L}_{r} = -\sum_{i=1}^{N} \left[ p_{i}^{r} \log \widehat{\boldsymbol{P}}_{i}^{r} + \left(1 - p_{i}^{r}\right) \log \left(1 - \widehat{\boldsymbol{P}}_{i}^{r}\right) \right] \tag{7} +$$ + +where $\widehat{P}_i^r = \psi_i[\widehat{\pmb{L}}^{-1}]_{i1}$ is the probability of the $i$ -th word being the root of the latent structure, and $p_i^r \in \{0, 1\}$ represents whether the $i$ -th word is the aspect. + +# 2.2.2 Semantic Graph + +The semantic graph (SeG) offers semantic information. The adjacency matrix $\mathbf{A}^{SeG} \in \mathbb{R}^{n \times n}$ of the SeG is obtained via a multi-head self-attention mechanism that calculates the semantic similarity: + +$$ +\boldsymbol{A}^{SeG} = \frac{\sum_{k=1}^{K} \boldsymbol{A}^{SeG, k}}{K} \tag{8} +$$ + +$$ +\boldsymbol{A}^{SeG, k} = \operatorname{softmax}\left(\frac{\boldsymbol{H} \boldsymbol{W}^{Q} \times (\boldsymbol{H} \boldsymbol{W}^{K})^{T}}{\sqrt{D_{H}}}\right) \tag{9} +$$ + +where $K$ is the number of attention heads and $\boldsymbol{A}^{SeG, k}$ is the attention score matrix of the $k$ -th head.
$D_H$ is the dimensionality of the contextual representation $\pmb{H}$ . + +# 2.3 Multi-Graph Fusion Module + +Since the SaLG fails to fully focus on the opinion expressions, we design a multi-graph fusion module with an adaptive fusion gate to provide semantic guidance, adaptively fusing semantic information from the SeG into the SaLG during iterative interaction. + +The hidden state representations of the SaLG and SeG at the $l$ -th layer are updated through stacked common graph convolutional (C-GCN) blocks: + +$$ +\boldsymbol{H}_{l}^{SaLG} = \sigma\left(\boldsymbol{A}^{SaLG} \boldsymbol{W}_{l}^{c} \boldsymbol{H}_{l-1}^{SaLG} + \boldsymbol{b}_{l}^{c}\right) \tag{10} +$$ + +$$ +\boldsymbol{H}_{l}^{SeG} = \sigma\left(\boldsymbol{A}^{SeG} \boldsymbol{W}_{l}^{c} \boldsymbol{H}_{l-1}^{SeG} + \boldsymbol{b}_{l}^{c}\right) \tag{11} +$$ + +where $H_{l}^{SaLG}$ and $H_{l}^{SeG}$ are the SaLG and SeG representations at the $l$ -th layer. $H_{l-1}^{SaLG}$ and $H_{l-1}^{SeG}$ are the inputs from the preceding layer of the C-GCN block, and $H$ is the initial input of the first block. $W_{l}^{c}$ and $b_{l}^{c}$ are the shared trainable parameters. Meanwhile, an adaptive fusion gate is adopted to adaptively integrate $H_{l}^{SaLG}$ and $H_{l}^{SeG}$ for each node: + +$$ +\boldsymbol{H}_{l}^{SaLG} = \operatorname{ReLU}\left(\boldsymbol{W}_{l}\left(\alpha \boldsymbol{H}_{l}^{SaLG} + \beta \boldsymbol{H}_{l}^{SeG}\right)\right) \tag{12} +$$ + +$$ +\alpha = \rho \cdot \sigma\left(\mathrm{g}\left(\boldsymbol{H}_{l}^{SaLG}\right)\right) \tag{13} +$$ + +$$ +\beta = 1 - \alpha \tag{14} +$$ + +where $\alpha$ and $\beta$ are the dynamic fusion proportions, $\mathrm{g}(\cdot)$ is a self-gating function (Bo et al., 2021) with a shared convolutional kernel, $\rho \in [0,1]$ is a hyper-parameter encoding prior knowledge, and $l\in [1,L]$ .
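The fusion gate of Eqs. (12)–(14) can be sketched as follows. This is only a shape-level illustration: the self-gating function $\mathrm{g}(\cdot)$ is approximated by a per-node linear score, and all weights are random stand-ins rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 8                              # 5 nodes with hidden size 8 (toy sizes)
H_salg = rng.standard_normal((n, d))     # SaLG node states at layer l
H_seg = rng.standard_normal((n, d))      # SeG node states at layer l
W_l = rng.standard_normal((d, d))        # stand-in for the trainable W_l
w_gate = rng.standard_normal((d, 1))     # stand-in for the self-gating kernel g(.)
rho = 0.2                                # prior-knowledge hyper-parameter

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Eqs. (13)-(14): per-node fusion proportions; alpha is bounded above by rho.
alpha = rho * sigmoid(H_salg @ w_gate)   # shape (n, 1), broadcast over features
beta = 1.0 - alpha

# Eq. (12): gate the two graph views, project, and apply ReLU.
H_fused = np.maximum(0.0, (alpha * H_salg + beta * H_seg) @ W_l)
```

Because $\alpha < \rho$, the SeG contribution always dominates at small $\rho$, which is the "prior knowledge" the hyper-parameter encodes.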
+ +We use a control factor $\omega = \sigma (\mathrm{g}(\pmb{H}_{l - 1}))$ to retain the information of the preceding layer of the C-GCN block, relieving the over-smoothing problem: + +$$ +\boldsymbol{H}_{l}^{SaLG} = \omega \cdot \boldsymbol{H}_{l}^{SaLG} + (1 - \omega) \cdot \boldsymbol{H}_{l-1}^{SaLG} \tag{15} +$$ + +Capture significant sentiment features. A latent-specific attention mechanism is utilized to capture significant sentiment features of the SaLG: + +$$ +\varepsilon = \operatorname{softmax}\left(\boldsymbol{H}_{L}^{SaLG} \boldsymbol{H}_{L}^{SeG^{\top}}\right) \tag{16} +$$ + +where $\varepsilon$ is the semantic-aware latent weight based on the output representation of the last C-GCN block. Then we obtain richer sentiment representations $z = \varepsilon H_{L}^{SeG}$ . To make the features aspect-oriented, a mask mechanism is utilized to get the aspect-oriented sentiment feature representation $z_{i}^{A} = m_{i}z_{i}$ : + +$$ +m_{i} = \left\{ \begin{array}{ll} 0, & 1 \leq i \leq \tau \ \text{or} \ \tau + m < i \leq n \\ 1, & \tau + 1 \leq i \leq \tau + m \end{array} \right. \tag{17} +$$ + +where $\tau + 1 \leq i \leq \tau + m$ indexes the aspect words. + +# 2.4 Affective Refinement Module + +In order to guide MGFN to determine the significant affective clues from surrounding contexts, we propose a novel affective refinement strategy to better correlate the aspect and opinion words. + +We use SenticNet6 (Cambria et al., 2020) to get the affective score $\eta_{i}$ for each word of the input sentence, obtaining a lexicon vector $\boldsymbol{lex} = [\eta_1,\eta_2,\dots ,\eta_n] \in \mathbb{R}^{n\times 1}$ , where $\eta_{i} = 0$ if the $i$ -th word is not in SenticNet6.
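The lexicon lookup behind $lex$ reduces to a dictionary get with a default of zero. The scores below are invented stand-ins for SenticNet6 entries, not the lexicon's actual values:

```python
# Invented polarity scores standing in for SenticNet6; the real values differ.
affect_lexicon = {"excellent": 0.88, "appalled": -0.79, "limited": -0.42}

tokens = ["the", "dishes", "are", "excellent"]
# eta_i = 0 for words outside the lexicon, as in the paper's definition of lex.
lex = [affect_lexicon.get(tok, 0.0) for tok in tokens]
```

In the model this vector becomes the regression target that the intermediate node scores are pulled toward.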
Meanwhile, the hidden state representation $H_{l}^{SaLG}$ at the $l$ -th layer is mapped into the intermediate vector $\boldsymbol{\gamma}^{SaLG} = [\gamma_1,\gamma_2,\dots ,\gamma_n] \in \mathbb{R}^{n\times 1}$ , where each low-dimensional node representation $\gamma_{i}$ is given by: + +$$ +\gamma_{i} = \boldsymbol{W}^{SaLG} \boldsymbol{H}_{l, i}^{SaLG} + \boldsymbol{b}^{SaLG} \tag{18} +$$ + +By minimizing the loss function $\mathcal{L}_s$ of the affective refinement strategy, ideally, our model will pay more attention to the opinion expressions of aspect words: + +$$ +\mathcal{L}_{s} = \left(\boldsymbol{\gamma}^{SaLG} - \boldsymbol{lex}\right)^{2} \tag{19} +$$ + +# 2.5 Model Training + +Softmax classifier. To deal with multi-word aspects, we apply average pooling on the aspect nodes of $\mathbf{z}^A$ , and calculate the sentiment probability distribution $\hat{y}_{(s,a)}$ by a linear layer with a softmax function: + +$$ +\hat{y}_{(s, a)} = \operatorname{softmax}\left(\boldsymbol{W}^{p} \operatorname{AvePooling}\left(\boldsymbol{z}^{A}\right) + \boldsymbol{b}^{p}\right) \tag{20} +$$ + +where $(s,a)$ is a sentence-aspect pair. + +Our training goal is to minimize the following overall objective function: + +$$ +\mathcal{L}(\Theta) = \lambda \mathcal{L}_{C} + \mu_{1} \mathcal{L}_{r} + \mu_{2} \mathcal{L}_{s} \tag{21} +$$ + +where $\Theta$ represents all trainable parameters of the model, and $\lambda$ , $\mu_{1}$ and $\mu_{2}$ are hyper-parameters. The cross-entropy loss $\mathcal{L}_{C}$ for the main classification task is defined as follows: + +$$ +\mathcal{L}_{C} = -\sum_{(s, a) \in \mathcal{D}} y_{(s, a)} \log \hat{y}_{(s, a)} \tag{22} +$$ + +where $\mathcal{D}$ contains all sentence-aspect pairs and $y_{(s,a)}$ is the true sentiment distribution. + +# 3 Experimental Setup + +# 3.1 Datasets + +We evaluate our model on three benchmark datasets.
The Laptop (LAP14) and Restaurant (REST14) datasets were released as part of the SemEval-2014 ABSA challenge (Pontiki et al., 2014), and the Twitter dataset is a collection of tweets from Dong et al. (2014). All three datasets have three sentiment polarities: positive, negative and neutral. Each dataset provides aspect terms and their corresponding polarities. Detailed statistics of the datasets can be found in Table 1.

# 3.2 Implementation Details

The Stanford parser $^2$ is utilized to obtain syntactic dependency relations. We employ the uncased English version of the BERT model $^3$ in PyTorch. The dropout rate is 0.3, and the number of layers of the graph convolutional block is 2. Our model is trained with a batch size of 16 using the Adam optimizer with a learning rate of $2e-5$. The coefficients $(\mu_{1}, \mu_{2})$ are set to (0.04, 0.04), (0.05, 0.06) and (0.06, 0.08) for the three datasets, respectively. The hyper-parameter $\lambda$ is 0.5, and $\rho$ is 0.2. We repeat each experiment three times and average the results. We use accuracy (Acc.) and macro-F1 (F1.) as the main evaluation metrics.

# 4 Experimental Results

# 4.1 Baselines

We compare our MGFN with the following state-of-the-art baselines:

- CDT (Sun et al., 2019b) used GCNs to learn aspect representations over a dependency tree.
| Dataset | #Positive (Train / Test) | #Negative (Train / Test) | #Neutral (Train / Test) |
| --- | --- | --- | --- |
| LAP14 | 976 / 337 | 851 / 128 | 455 / 167 |
| REST14 | 2164 / 727 | 807 / 196 | 637 / 196 |
| TWITTER | 1507 / 172 | 1528 / 169 | 3016 / 336 |
Table 1: Statistics of the three datasets.

- BERT-SRC (Devlin et al., 2019) is the vanilla BERT model for classification.
- R-GAT (Wang et al., 2020) designed a new aspect-oriented dependency tree and encoded the new tree with a relational GAT.
- KumaGCN (Chen et al., 2020) combined an external dependency parse graph with a latent graph to generate task-specific representations.
- DGEDT (Tang et al., 2020) proposed a dependency graph enhanced dual-transformer network.
- BATAE-GRU (Wang and Wang, 2021) used an attention-based model to relate the aspect to its context.
- DualGCN (Li et al., 2021b) proposed a dual-graph GCN to address the disadvantages of attention-based and dependency-tree-based methods.
- ACLT (Zhou et al., 2021) designed an aspect-centric latent tree to shorten the distance between aspects and opinion words.
- BERT4GCN (Xiao et al., 2021) utilized outputs from intermediate layers of BERT together with positional information to augment the GCN.
- CPA-SA (Huang et al., 2022) designed two asymmetrical contextual position weight functions to adjust the weight of the aspect.
- IMA (Wang et al., 2022) combined an interaction matrix with a global attention mechanism to measure relationships between words.
- HGCN (Xu et al., 2022) synthesizes information from the constituency tree and the dependency tree to enrich the representation.

The baselines and MGFN are all BERT-based, and we report the baselines' published results. For the CDT method, we implement it under the BERT setting using its public implementation. The source code and BERT settings of KumaGCN are not provided, so we use the results reported by ACLT for a fair comparison with the other models.

# 4.2 Overall Performance Comparison

Table 2 shows the main experimental results of the baselines and our model. We can observe that:
| Model | LAP14 (Acc. / F1., %) | REST14 (Acc. / F1., %) | Twitter (Acc. / F1., %) |
| --- | --- | --- | --- |
| BERT-SRC (Devlin et al., 2019) | 78.99 / 75.03 | 84.46 / 76.98 | 73.55 / 72.14 |
| CDT (Sun et al., 2019b) | 79.70 / 75.61 | 86.36 / 80.16 | 77.50 / 76.54 |
| R-GAT (Wang et al., 2020) | 78.21 / 74.07 | 86.60 / 81.35 | 76.15 / 74.88 |
| DGEDT (Tang et al., 2020) | 79.80 / 75.60 | 86.30 / 80.00 | 77.90 / 75.40 |
| KumaGCN (Chen et al., 2020) | 79.57 / 75.61 | 84.91 / 77.22 | 74.33 / 73.42 |
| BERT4GCN (Xiao et al., 2021) | 77.49 / 73.01 | 84.75 / 77.11 | 74.73 / 73.76 |
| BATAE-GRU (Wang and Wang, 2021) | 78.59 / 74.78 | 84.11 / 76.09 | 74.34 / 72.76 |
| ACLT (Zhou et al., 2021) | 79.68 / 75.83 | 85.71 / 78.44 | 75.48 / 74.51 |
| DualGCN (Li et al., 2021b) | 81.80 / 78.10 | 87.13 / 81.16 | 77.40 / 76.02 |
| CPA-SA (Huang et al., 2022) | 75.18 / 71.5 | 82.64 / 73.38 | - / - |
| IMA (Wang et al., 2022) | 77.44 / 73.48 | 82.81 / 73.66 | - / - |
| HGCN (Xu et al., 2022) | 79.59 / - | 86.45 / - | - / - |
| Our MGFN | 81.83 / 78.26 | 87.31 / 82.37 | 78.29 / 77.27 |
Table 2: Main experimental results of aspect-based sentiment classification on three public datasets. The best results are in bold, and the second-best results are underlined.

1) Our MGFN model achieves state-of-the-art performance over all baselines on the three datasets. Compared to the state-of-the-art graph-based model DualGCN, our model achieves F1 improvements of 1.21% and 1.25% on REST14 and Twitter, respectively, and slightly outperforms DualGCN (by 0.16%) on LAP14. 2) The state-of-the-art latent-graph-based model ACLT does not outperform DualGCN, indicating that latent graphs need further improvement. 3) Models based on dependency parse trees (e.g., CDT and DualGCN) usually outperform syntax-free models (e.g., BERT-SRC, CPA-SA), which shows that syntactic dependency relation information is effective; our MGFN therefore proposes a novel SaLG to leverage richer syntactic dependency relations. 4) KumaGCN combines a latent graph with a syntactic dependency graph but still performs poorly. In contrast, our MGFN successfully leverages the affective semantic information of words to improve the results.

# 4.3 Ablation Study

We conduct an ablation study by removing modules and loss terms, shown in Table 3. Removing the syntax dependency relation labels (w/o Syn. Information) leads to performance degradation. In MGFN w/o adaptive fusion gate, we do not fuse SeG into SaLG during the iterations. We observe

![](images/5a8880e621ea7ae565d28f13a3f4ee16faa73c8f0ebfec00142676c6c4940265.jpg)
(a) w/o syntax dependency relation latent tree

![](images/140ce3a9f3b52ed65355b2ffbdc16879e722036099192b92c6e3b04ee4cbf592.jpg)
(b) syntax-aware latent tree

Figure 3: A review from the REST14 dataset illustrating the different trees. The aspect words are in blue.
that w/o SaLG, w/o SeG and w/o adaptive fusion gate all result in performance drops, showing that adaptively integrating semantic information into SaLG improves the performance of MGFN. MGFN w/o $\mathcal{L}_r$ & $\mathcal{L}_s$ removes both the root constraint strategy and the affective refinement strategy, while MGFN w/o $\mathcal{L}_r$ or $\mathcal{L}_s$ removes only one of them; both settings lead to performance drops.

# 5 Discussion and Analysis

# 5.1 Effect of Syntax-aware Latent Graph

To investigate the effect of SaLG, we utilize the latent tree w/o syntax dependency relation informa
| Model | LAP14 (Acc. / F1., %) | REST14 (Acc. / F1., %) | Twitter (Acc. / F1., %) |
| --- | --- | --- | --- |
| Our MGFN | 81.83 / 78.26 | 87.31 / 82.37 | 78.29 / 77.27 |
| w/o Syn. Information | 81.06 / 76.58 | 86.86 / 81.73 | 77.55 / 76.06 |
| w/o SaLG | 80.22 / 76.23 | 86.32 / 79.92 | 76.25 / 75.32 |
| w/o SeG | 80.38 / 76.41 | 86.60 / 80.32 | 76.63 / 75.92 |
| w/o Adaptive Fusion Gate | 80.53 / 76.69 | 86.87 / 81.15 | 76.81 / 75.98 |
| w/o $\mathcal{L}_r$ & $\mathcal{L}_s$ | 80.22 / 76.23 | 86.68 / 79.83 | 77.40 / 75.87 |
| w/o $\mathcal{L}_r$ | 81.17 / 78.02 | 87.02 / 80.60 | 77.55 / 76.58 |
| w/o $\mathcal{L}_s$ | 80.38 / 76.38 | 86.70 / 80.11 | 77.51 / 75.99 |
+ +Table 3: Ablation study experimental results + +![](images/a9be7cf8a1aa07f2f551638c7290bcd3acf62d7efd8d5bee002b21dd65f4c51a.jpg) +Figure 4: Attention visualization of learned latent weights by MGFN and MGFN w/o $\mathcal{L}_s$ models. "design" is the aspect word. + +tion to compare with our novel syntax-aware latent tree, shown in Figure 3. Specifically, in Figure 3 (a), the edge weight from aspect "design" to opinion word "good" is only 0.12, while the weights to neighbour words are much higher (e.g. 0.15 for "The", and 0.21 for "atmosphere", etc.). However, in Figure 3 (b), the weight between "design" and "good" increases to 0.15, slightly higher than neighbour words. Utilizing syntactic dependency relation label information, aspect pays more attention to opinion word "good" in our SaLG. + +# 5.2 Impact of Affective Refinement Strategy + +In order to verify the effectiveness of the affective refinement strategy, we visualize the attention weight $\varepsilon$ in Eq. (16) of the example review. In Figure 4, we observe that the MGFN w/o $\mathcal{L}_s$ model assigns higher attention on "The", "and" and "atmosphere" incorrectly when $\mathcal{L}_s$ is not utilized. In comparison, for our MGFN model, the aspect "design" can assign the highest attention on "good" obviously, since opinion word "good" contains the highest sentiment score in lexicon vector of example review. + +![](images/62d070d56af23f1e1d8cc76b80401384f1ad96ed2653aa46d05006a00e25482c.jpg) +(a) + +![](images/8f9c943fdd7947623da6c070bb030a57a3f0fae250210f46088ca00655d3fc81.jpg) +(b) + +![](images/2178b522e20add84da0f739082397148c38da0c84b346beb9e9be6defca83ecf.jpg) +Figure 5: The impact of different $\lambda$ . +(a) +Figure 6: The impact of the number of common graph convolutional block. 
![](images/da999dba327ab23378dc802a84c00f111099202124045803251f9a72136811e0.jpg)
(b)

# 5.3 Hyper-parameter Analysis

To investigate the effect of the hyper-parameter $\lambda$, we vary it from 0.1 to 0.9, as shown in Figure 5. The hyper-parameter $\lambda$ represents the proportion of the main classification task in the total objective function. From Figure 5, the performance reaches its highest when $\lambda$ equals 0.5. If $\lambda$ is less than 0.5, the main task cannot be fully trained; if $\lambda$ is more than 0.5, the proposed constraint strategies fail to work well. Therefore, it is important to set an appropriate $\lambda$ to balance the main classification task and the two constraint strategies.
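The balancing act analysed in this section follows directly from the weighted objective of Eq. (21), with $\mathcal{L}_s$ computed as in Eqs. (18)-(19). A minimal sketch, where the node states, lexicon scores, and the classification/root loss values are all illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 4
H = rng.normal(size=(n, d))        # SaLG node states (toy values)
W = rng.normal(size=(d,))          # projection of Eq. (18); bias omitted

# Eqs. (18)-(19): map each node to a scalar and pull it toward the
# SenticNet-style lexicon score (these lexicon entries are made up)
gamma = H @ W                      # (n,)
lex = np.array([0.0, 0.0, 0.0, 0.9, 0.0, -0.3])
L_s = float(np.sum((gamma - lex) ** 2))

# Eq. (21): lambda trades the main task off against the two constraints
L_c, L_r = 0.7, 0.1                # placeholder classification / root losses
lam, mu1, mu2 = 0.5, 0.04, 0.04    # the LAP14 setting reported in Sec. 3.2
total = lam * L_c + mu1 * L_r + mu2 * L_s
assert total > lam * L_c           # the constraint terms only add penalty
```

Raising `lam` shifts weight toward the classification loss, which matches the observation that the constraint strategies stop helping once $\lambda$ exceeds 0.5.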
| Sentence | ACLT | MGFN w/o $\mathcal{L}_s$ | MGFN |
| --- | --- | --- | --- |
| The [menu]$_{neg}$ is limited but the [dishes]$_{pos}$ are excellent. | (neg✓, pos✓) | (neg✓, pos✓) | (neg✓, pos✓) |
| For my user experience, the [speed]$_{pos}$ is better than the [battery life]$_{neg}$. | (pos✓, pos✗) | (pos✓, neg✓) | (pos✓, neg✓) |
| I had great interest in this restaurant due to its [atmosphere]$_{pos}$, but the [service]$_{neg}$ was disappointing. | (neg✗, neg✓) | (neu✗, neg✓) | (pos✓, neg✓) |
Table 4: Case study results of three different models.

# 5.4 Impact of Number of C-GCN Blocks

To investigate the impact of the number $L$ of C-GCN blocks, we vary $L$ from 1 to 9, as shown in Figure 6. Our model achieves the best performance with 2 C-GCN blocks. When $L$ is less than 2, MGFN cannot fully integrate semantic information from SeG into SaLG. When $L$ is excessive, the performance decreases due to vanishing gradients and over-smoothing; however, the performance of MGFN does not degrade sharply, thanks to our control factor $\omega$.

# 5.5 Case Study

We conduct a case study by classifying a few examples with different models, shown in Table 4. We use boldface in brackets to mark the aspects of each sentence and subscripts to indicate the corresponding gold sentiment polarities. In the first sentence, the aspects "menu" and "dishes" are both next to their own opinion words, so all models easily assign the correct sentiment polarities. In the second sentence, the aspects "speed" and "battery life" are both adjacent to the opinion expression "better". The ACLT model cannot identify the dependency relation type information, which results in a wrong prediction for the aspect "battery life". In the third sentence, the aspect "atmosphere" is closer to the opinion expression "disappointing", which leads to incorrect predictions by the ACLT and MGFN w/o $\mathcal{L}_s$ models. In contrast, our MGFN includes an affective refinement strategy and captures the significant affective cue of the true opinion expression "great interest".

# 6 Related Work

Aspect-based sentiment analysis: Sentiment analysis is one of the most active research areas in natural language processing (Liao et al., 2021; Tang et al., 2022), and is widely studied in QA systems (Ma et al., 2021), stance detection (AlDayel and Magdy, 2021; Hardalov et al., 2021), recommendation systems (Aljunid and Huchaiah, 2021; Abbasi-Moud et al., 2021), and event detection (Ma et al., 2022).
Aspect-based sentiment analysis (ABSA) was first proposed by Hu and Liu (2004) to refine sentiment analysis; it aims to detect fine-grained sentiments towards different aspects. Early efforts on ABSA utilize attention-based neural models to capture semantic interactions (Wang et al., 2016; Chen et al., 2017). Other efforts (Wang et al., 2016; Nguyen and Nguyen, 2018; Huang et al., 2021) try to explicitly establish syntactic dependency connections between words.

Graph neural networks: Recently, graph neural networks (GNNs) (Huang et al., 2019; Kim et al., 2019) have received growing attention and have been successfully used in many applications such as action recognition (Zhang et al., 2022), relation extraction (Bastos et al., 2021; Zhang et al., 2021) and scene graph generation (Li et al., 2021a). Yao et al. (2019) innovatively applied graph convolutional networks (GCNs) to text classification in natural language processing. For ABSA, Zhang et al. (2019) used GCNs to encode the dependency information of the syntactic dependency parse tree. Tang et al. (2020) proposed a dependency graph enhanced dual-transformer network (DGEDT) that allows the dependency graph to guide the representation learning of the transformer encoder. Wang et al. (2020) constructed aspect-oriented dependency trees, reshaping the ordinary dependency parse tree with manual rules so that it is rooted at the aspect. Li et al. (2021b) used a probability matrix containing all dependency structures of the input sentence from an off-the-shelf dependency parser to alleviate the inaccurate-parsing problem and to integrate syntactic and semantic information.

More recently, several teams have explored constructing latent graphs that adaptively capture the relations between the words of a sentence in an end-to-end fashion. Chen et al. (2020) constructed a latent graph sampled from the Hard-Kuma distribution and combined it with a dependency parse graph to generate task-specific representations. Zhou et al.
(2021) utilized a variant of Kirchhoff's Matrix-Tree Theorem to induce a task-specific, aspect-centric latent dependency tree.

# 7 Conclusion

In this paper, we propose the MGFN model to address the disadvantages of latent-graph-based models for aspect-based sentiment analysis. We construct a novel SaLG to leverage the richer syntax dependency relation label information, and adaptively fuse the semantic information from SeG into SaLG to facilitate the learning of sentiment representations. Moreover, to capture more significant affective clues from surrounding contexts, we propose an affective refinement strategy in the multi-graph fusion module. This strategy guides MGFN to pay more attention to the opinion expressions of aspects. Extensive experiments on three datasets show that our model achieves the best performance.

# Limitations

Our MGFN model is designed for English datasets, so it is only applicable to English reviews. Moreover, as we construct two graphs for every sentence and fuse the information of the different graphs, the scale of the graphs cannot be too large; that is, our proposed MGFN cannot be applied to long texts.

# Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61976051), the Major Key Project of PCL (No. PCL2021A09, PCL2021A02, PCL2022A03), and the Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies (2022B1212010005).

# References

Zahra Abbasi-Moud, Hamed Vahdat-Nejad, and Javad Sadri. 2021. Tourism recommendation system based on semantic clustering and sentiment analysis. Expert Syst. Appl.
Abeer AlDayel and Walid Magdy. 2021. Stance detection on social media: State of the art and trends. Inf. Process. Manag.
Mohammed Fadhel Aljunid and Manjaiah Doddaghatta Huchaiah. 2021. An efficient hybrid recommendation model based on collaborative filtering recommender systems. CAAI Trans. Intell. Technol., 6(4):480-492.
+ +Anson Bastos, Abhishek Nadgeri, Kuldeep Singh, Isaiah Onando Mulang', Saeedeh Shekarpour, Johannes Hoffart, and Manohar Kaul. 2021. RECON: relation extraction using knowledge graph context in a graph neural network. In WWW '21: The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, April 19-23, 2021, pages 1673-1685. ACM / IW3C2. +Marouane Birjali, Mohammed Kasri, and Abderrahim Beni Hssane. 2021. A comprehensive survey on sentiment analysis: Approaches, challenges and trends. Knowl. Based Syst., 226:107134. +Deyu Bo, Xiao Wang, Chuan Shi, and Huawei Shen. 2021. Beyond low-frequency information in graph convolutional networks. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI. +Erik Cambria, Yang Li, Frank Z. Xing, Soujanya Poria, and Kenneth Kwok. 2020. Senticnet 6: Ensemble application of symbolic and subsymbolic AI for sentiment analysis. In CIKM '20: The 29th ACM International Conference on Information and Knowledge Management. +Chenhua Chen, Zhiyang Teng, and Yue Zhang. 2020. Inducing target-specific latent structures for aspect sentiment classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 5596-5607. Association for Computational Linguistics. +Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, NAACL. +Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. 
Adaptive recursive neural network for target-dependent twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL.
Momchil Hardalov, Arnav Arora, Preslav Nakov, and Isabelle Augenstein. 2021. Cross-domain label-adaptive stance detection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2004.
Bo Huang, Ruyan Guo, Yimin Zhu, Zhijun Fang, Guohui Zeng, Jin Liu, Yini Wang, Hamido Fujita, and Zhicai Shi. 2022. Aspect-level sentiment analysis with aspect-specific context position information. Knowl. Based Syst., 243.
Lianzhe Huang, Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2019. Text level graph neural network for text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP.
Yuan Huang, Zhixing Li, Wei Deng, Guoyin Wang, and Zhimin Lin. 2021. D-BERT: incorporating dependency-based attention into BERT for relation extraction. CAAI Trans. Intell. Technol., 6(4):417-425.
Jongmin Kim, Taesup Kim, Sungwoong Kim, and Chang D. Yoo. 2019. Edge-labeling graph neural network for few-shot learning. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR.
Rongjie Li, Songyang Zhang, Bo Wan, and Xuming He. 2021a. Bipartite graph network with adaptive message passing for unbiased scene graph generation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR.
Ruifan Li, Hao Chen, Fangxiang Feng, Zhanyu Ma, Xiaojie Wang, and Eduard H. Hovy. 2021b. Dual graph convolutional networks for aspect-based sentiment analysis.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, ACL, pages 6319-6329. Association for Computational Linguistics.
Qing Liao, Heyan Chai, Hao Han, Xiang Zhang, Xuan Wang, Wen Xia, and Ye Ding. 2021. An integrated multi-task model for fake news detection. IEEE Transactions on Knowledge and Data Engineering, pages 1-1.
Kaixin Ma, Filip Ilievski, Jonathan Francis, Yonatan Bisk, Eric Nyberg, and Alessandro Oltramari. 2021. Knowledge-driven data construction for zero-shot evaluation in commonsense question answering. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI.
Xiaobo Ma, Yongbin Liu, and Chunping Ouyang. 2022. Capturing semantic features to improve Chinese event detection. CAAI Trans. Intell. Technol., 7(2):219-227.
Huy-Thanh Nguyen and Minh-Le Nguyen. 2018. Effective attention networks for aspect-level sentiment classification. In 10th International Conference on Knowledge and Systems Engineering, KSE.
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation, SemEval@COLING 2014.
Kai Sun, Richong Zhang, Samuel Mensah, Yongyi Mao, and Xudong Liu. 2019a. Aspect-level sentiment analysis via convolution over dependency tree. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP.
Kai Sun, Richong Zhang, Samuel Mensah, Yongyi Mao, and Xudong Liu. 2019b. Aspect-level sentiment analysis via convolution over dependency tree. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP.
Hao Tang, Donghong Ji, Chenliang Li, and Qiji Zhou. 2020.
Dependency graph enhanced dual-transformer structure for aspect-based sentiment classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL, pages 6578-6588. Association for Computational Linguistics. +Jingyao Tang, Yun Xue, Ziwen Wang, Shaoyang Hu, Tao Gong, Yinong Chen, Haoliang Zhao, and Luwei Xiao. 2022. Bayesian estimation-based sentiment word embedding model for sentiment analysis. CAAI Trans. Intell. Technol., 7(2):144-155. +Kai Wang, Weizhou Shen, Yunyi Yang, Xiaojun Quan, and Rui Wang. 2020. Relational graph attention network for aspect-based sentiment analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL, pages 3229-3238. Association for Computational Linguistics. +Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016. Recursive neural conditional random fields for aspect-based sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP. +Xiaodi Wang, Xiaoge Pan, Tian Yang, Jianhua Xie, and Mingwei Tang. 2022. Aspect-based sentiment analysis using interaction matrix and global attention neural network. The Computer Journal. +Yuan Wang and Qian Wang. 2021. BATAE-GRU: attention-based aspect sentiment analysis model. In ISEEIE 2021: International Symposium on Electrical, Electronics and Information Engineering, Seoul Republic of Korea, February 19 - 21, 2021. +Zeguan Xiao, Jiarun Wu, Qingliang Chen, and Congjian Deng. 2021. BERT4GCN: using BERT intermediate layers to augment GCN for aspect-based sentiment classification. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP. + +Lvxiaowei Xu, Xiaoxuan Pang, Jianwang Wu, Ming Cai, and Jiawei Peng. 2022. Learn from structural scope: Improving aspect-level sentiment analysis with hybrid graph convolutional networks. CoRR, abs/2204.12784. +Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. 
Graph convolutional networks for text classification. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI. +Chen Zhang, Qiuchi Li, and Dawei Song. 2019. Aspect-based sentiment classification with aspect-specific graph convolutional networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP. +Jiaxu Zhang, Gaoxiang Ye, Zhigang Tu, Yongtao Qin, Qianqing Qin, Jinlu Zhang, and Jun Liu. 2022. A spatial attentive and temporal dilated (SATD) GCN for skeleton-based action recognition. CAAI Trans. Intell. Technol., 7(1):46-55. +Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen, Fei Huang, Luo Si, and Huajun Chen. 2021. Document-level relation extraction as semantic segmentation. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI. +Pinlong Zhao, Linlin Hou, and Ou Wu. 2020. Modeling sentiment dependencies with graph convolutional networks for aspect-level sentiment classification. Knowl. Based Syst., 193:105443. +Yuxiang Zhou, Lejian Liao, Yang Gao, Zhanming Jie, and Wei Lu. 2021. To be closer: Learning to link up aspects with opinions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP. 
# A Fine-grained Chinese Software Privacy Policy Dataset for Sequence Labeling and Regulation Compliant Identification

Kaifa Zhao $^{1}$ , Le Yu $^{1}$ , Shiyao Zhou $^{1}$ , Jing Li $^{1}$ , Xiapu Luo $^{1}$ , Yat Fei Aemon Chiu $^{2}$ , Yutong Liu $^{2}$

$^{1}$ Department of Computing, The Hong Kong Polytechnic University, HKSAR, China

$^{2}$ Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, HKSAR, China

$^{1}$ kaifa.zhao@connect.polyu.hk, lele08.yu@polyu.edu.hk, shiyao.zhou@connect.polyu.hk

$^{1}$ {jing-amelia.li, daniel.xiapu.luo}@polyu.edu.hk

$^{2}$ {yat-fei-dylan.zhao,yitang.liu}@connect.polyu.hk

# Abstract

Privacy protection has raised great attention at both the legal level and in user awareness. To protect user privacy, countries enact laws and regulations requiring software privacy policies to regulate software behavior. However, privacy policies are written in natural language with many legal terms and software jargon that prevent users from understanding or even reading them. It is therefore desirable to use NLP techniques to analyze privacy policies and help users understand them. Furthermore, existing datasets ignore legal requirements and are limited to English. In this paper, we construct the first Chinese privacy policy dataset, namely CA4P-483, to facilitate sequence labeling tasks and regulation compliance identification between privacy policies and software. Our dataset includes 483 Chinese Android application privacy policies, over 11K sentences, and 52K fine-grained annotations. We evaluate families of robust and representative baseline models on our dataset. Based on the baseline performance, we provide findings and potential research directions on our dataset. Finally, we investigate potential applications of CA4P-483$^1$, combining regulation requirements and program analysis.
# 1 Introduction

A privacy policy is a legal document written in natural language that discloses how and why a controller collects, shares, uses, and stores user data (GDPR, 2016; PISS, 2020; NISSTC, 2020). Privacy policies help users understand whether their privacy will be abused and decide whether to use the product. However, privacy policies are tedious, making it hard for users to read and understand them (Staff, 2011). Natural language processing techniques have achieved great success in understanding document semantics (Yang et al., 2021; Wen et al., 2021; Ding et al., 2020). Thus, it is necessary to apply natural language processing to analyze privacy policies (Yu et al., 2016; Andow et al., 2020; Yu et al., 2015; Fan et al., 2020) and help users become aware of apps' privacy access behavior (Zhou et al., 2021).

The Chinese software privacy policy processing $(\mathsf{CSP}^3)$ task is a sequence labeling problem that recognizes privacy-related components in sentences. $\mathsf{CSP}^3$ has two main unique features. First, privacy policies contain a large amount of information (Yu et al., 2018), such as how the app stores user data and how to contact the app developer. In our dataset, we concentrate on data-access-related sentences, as they are directly related to user privacy. Second, privacy policies are written in legally binding professional language and contain software jargon. Thus, a strong background (Zhou et al., 2022a,b) is required to understand the statements inside. Both characteristics prevent users from understanding privacy policies. A well-annotated dataset can facilitate building automatic privacy policy analysis tools and further help users protect their privacy.
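The component-recognition view of $\mathsf{CSP}^3$ described above can be made concrete with a character-level BIO-tagging sketch; the clause, the `DATA` label name, and the decoding helper below are illustrative assumptions, not CA4P-483's actual annotation scheme.

```python
# Hypothetical character-level BIO tags for the clause
# "我们收集位置信息" ("We collect location information").
sentence = "我们收集位置信息"
tags = ["O", "O", "O", "O", "B-DATA", "I-DATA", "I-DATA", "I-DATA"]
assert len(sentence) == len(tags)  # one tag per character

def extract_spans(text, bio):
    """Decode BIO tags into (label, surface-string) spans."""
    spans, start, label = [], None, None
    for i, t in enumerate(bio + ["O"]):          # sentinel closes the last span
        if t.startswith("B-") or t == "O":
            if start is not None:                # close the open span
                spans.append((label, text[start:i]))
                start, label = None, None
        if t.startswith("B-"):
            start, label = i, t[2:]
        elif t.startswith("I-") and start is None:
            start, label = i, t[2:]              # tolerate a stray I- tag
    return spans

assert extract_spans(sentence, tags) == [("DATA", "位置信息")]
```

A sequence labeler trained on such annotations outputs one tag per character, from which the privacy-related components (here, the data type "位置信息") are recovered by span decoding.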
Although privacy policy datasets have been proposed recently (Wilson et al., 2016; Zimmeck et al., 2019), labels in existing datasets are coarse-grained (i.e., sentence-level annotations (Wilson et al., 2016)) and limited to a few privacy practices (Zimmeck et al., 2019). Besides, existing datasets only include English privacy policies, which limits their application in regions with other languages. We construct a fine-grained Chinese dataset for software privacy policy analysis.

In this work, we focus on Android application privacy policies, as Android possesses the largest share of mobile operating systems (statcounter, 2022) and a large number of Android privacy data leaks have been revealed (Shrivastava and Kumar, 2021; Sivan et al., 2019). Unlike previous work (Wilson et al., 2016; Zimmeck et al., 2019), we deal with the problem using sequence labeling methods, and pay special attention to Chinese privacy policies. The motivations come from the following four aspects:

First, worldwide regulation departments enact laws (NISSTC, 2020; PISS, 2020; GDPR, 2016; CCPA, 2016; CLPRC, 2016) to regulate software behaviors and protect users' privacy. The laws require software to clarify how and why it needs to access user data. Analyzing privacy policies can help users understand how apps process their data and identify whether apps comply with laws. Second, as a sequence labeling task, $\mathsf{CSP}^3$ aims to identify how and why software collects, shares, and manages users' data according to regulations. $\mathsf{CSP}^3$ can be abstracted as identifying components in privacy policy documents, such as the data type and the purpose of using user data. NLP techniques can help automatically analyze privacy policies. Third, existing privacy policy analysis research is limited to English and totally omits other languages.
With over 98.38 billion app downloads (Statista, 2022) and privacy-related regulations enacted in China, it is necessary and urgent to research $\mathsf{CSP}^3$ . Last but not least, recent research in other communities, such as software engineering (Yu et al., 2016; Nema et al., 2022) and cyber security (Andow et al., 2020, 2019), demonstrates the need for analyzing privacy policies to help analysts identify whether apps' behavior is consistent with their privacy policies.

In this work, we make the following efforts to advance $\mathsf{CSP}^3$ :

First, we construct a novel large-scale human-annotated Chinese Android application privacy policy dataset, namely CA4P-483. Specifically, we manually visit the software markets, such as Google Play (Google, 2022a) and AppGallery (Huawei, 2022a), check the provided privacy policy website, and download the Chinese version if available. We finally collect 483 documents. To determine the labels in the privacy policy analysis scenario, we read through Chinese privacy-related regulations and summarize seven components (§2.2). We annotate all occurrences of the components in 11,565 sentences from the 483 documents. Unlike the paragraph-level annotations in existing privacy policy datasets (Wilson et al., 2016), CA4P-483 is annotated at the character level.

Second, based on CA4P-483, we summarize families of representative baselines for Chinese sequence labeling. In detail, we first evaluate the performance of several classic sequence labeling models on our dataset, including Conditional Random Field (CRF) (Kudo, 2005), Hidden Markov Model (HMM) (Morwal et al., 2012), BiLSTM (Graves and Schmidhuber, 2005), BiLSTM-CRF (Lample et al., 2016), and BERT-BiLSTM-CRF (Devlin et al., 2018). Recent work shows lattice knowledge improves the performance of Chinese sequence labeling tasks. We therefore also involve lexicon-based models, such as Lattice-LSTM (Zhang and Yang, 2018).

Third, we investigate potential applications of CA4P-483.
Combining law knowledge, we first identify whether a privacy policy violates regulation requirements based on CA4P-483. We also identify whether an app behaves consistently with its privacy policy statements, combining software analysis (Zhao et al., 2021; Zhou et al., 2020).

The contributions of this work are three-fold:

- To the best of our knowledge, we construct the first Chinese privacy policy dataset, namely CA4P-483, integrating abundant fine-grained annotations.
- We experimentally evaluate and analyze the results of different families of sequence labeling baseline models on our dataset. We also summarize difficulties in our dataset, and provide findings and further research topics on our dataset.
- We investigate potential applications of CA4P-483 to regulate privacy policies with law knowledge and program analysis technologies.

# 2 Dataset Construction

# 2.1 Dataset collection

We manually collect the Chinese privacy policies from Android application markets. According to application market requirements (Huawei, 2022b; Google, 2022b), developers must provide privacy policies to claim their user data access behavior and to ensure apps will not violate laws or regulations. Since privacy policies are publicly available for users to understand the apps' access to personal data, three authors of this paper manually access the most popular apps in the markets and visit the privacy policy websites provided at the moment of collection (January 2021). We use html2text (Alir3z4, 2011) to extract the text content. Finally, we use tagtog (Cejuela et al., 2014) for document annotation.

Next, we annotate CA4P-483 based on the law requirements. Specifically, we analyze Chinese privacy-related laws and regulations (NISSTC, 2020; PISS, 2020; of China et al., 2019; Committee, 2022), and find requirements for apps' privacy processing behavior. For example, GB/T41391-2022 Article 4.n) states that "developers should expressly state the purpose of applying or collecting information to the subject of personal information." Finally, we summarize seven types of labels related to requirements for apps' access to user data.

![](images/6d8d683078f1a77061cc619b3e76cdcae9e466cce02a4c1b2d33554678ef25b9.jpg)
(a) Demo 1.

![](images/1a8680286167416c1ccfe895378d84caab31c189ee9228fc221b0e5b19e70606.jpg)
(b) Demo 2.

![](images/fc8cc97a7af51e9effa365e057090d86a43df1a7c6352155c5d2c64a47517ed2.jpg)
(c) Annotation legend.
Figure 1: Annotation demos from CA4P-483. We translate the statements into English for illustration.

# 2.2 Fine-grained annotations

For each privacy policy, we concentrate on the sentences that describe data processing behavior. After locating the sentences, we annotate seven components, i.e., the controller, data entity, collection, sharing, condition, purpose, and receiver.

Data controller. According to regulation requirements, the data controller is the party that determines the purpose and means of personal data processing. A data controller could be the app (first party) or a third party. As shown in Fig.1, the data controllers are "third-party platforms" in Fig.1(a) while the data controller is "we" in Fig.1(b). Thus, we annotate data controllers according to sentence semantics, i.e., who is responsible for processing the data.

Data entity. Data entities are any information that can identify or reflect the activities of a natural person (PISS, 2020). Recent research (Cai and Chen, 2012; Shokri et al., 2017) demonstrates the possibility of combining various information to infer and even locate a specific person. Thus, we annotate all data nouns or noun phrases that are requested in privacy policies, including sensitive information, such as device id, and normal information, such as device type.

Collection. Collection actions are verbs that describe how controllers access data, such as gather (收集) and obtain (获取).

Sharing.
Sharing actions are verbs that indicate whether the data controller will distribute data to others. Although both Sharing and Collection describe how a party accesses user data, we differentiate them according to the requirements that laws place on the action, such as Articles 5 and 9.2 in (PISS, 2020).

Condition. Condition describes the situation in which the data controller will access personal data. Laws require data controllers to inform users under what conditions their data will be processed. For example, bank apps may require the users' identification information when activating a bank account.

Purpose. Purpose should state why the data controller processes user data. Laws enact specific requirements for user data access. For example, PISS Article 4.d) requires controllers to clearly state the purpose of processing data. Purpose can also help users understand why the app collects their data and further determine whether to give consent, as shown in Fig.1(a).

Data receiver. Data receiver describes the parties that receive user data. Laws not only ask apps to clarify who will get shared data (PISS, 2020) but also restrict the data receivers' behavior (NISSTC, 2020), such as why they process user data.

# 2.3 Human annotation process

Our privacy policy annotation consists of two phases: coarse-grained annotation and fine-grained annotation. Coarse-grained annotation labels privacy policies at the paragraph level following previous work (Wilson et al., 2016). Fine-grained annotation labels our defined components at the word level based on the coarse-grained annotation.

For the first phase, three authors of this paper, who have researched privacy policies and software engineering for over eight and three years respectively, label ten privacy policies for reference and record a video instruction to guide annotators. Then, we hire thirty undergraduates in our university to annotate the dataset.
The three instructors train each annotator for at least four hours to familiarize them with the dataset and requirements. Students are asked to annotate 1000 Android apps' privacy policies in Chinese, and each privacy policy should be analyzed for at least 30 minutes to ensure quality. Each privacy policy is allocated to at least four annotators. Finally, the three instructors inspect each annotation.

For the second phase, we select two undergraduates whose coarse-grained annotations have high precision to conduct the fine-grained annotation. Specifically, we select 483 documents whose coarse-grained annotations pass inspection. The instructors first annotate ten documents to guide the undergraduates. The annotators also keep discussing with the instructors whenever the role of a component in a sentence is unclear. Each annotator is required to label each privacy policy for at least 30 minutes to guarantee the dataset quality.

Finally, the instructors analyze the annotations and use Fleiss' Kappa metrics (Cohen, 1960; Wilson et al., 2016) to evaluate the agreements. Table 1 shows that the average Kappa value $(77.20\%)$ indicates substantial agreement, i.e., the Kappa value lies in 0.61-0.80, and four components achieve almost perfect agreement (0.81-1.00). Condition, which only reaches moderate agreement, is affected by the overlap between labels (details in Appendix 9.3).

# 2.4 Dataset statistics and comparison

We conduct statistical analysis and show the results in Table 1. CA4P-483 is split into training, development, and test sets. Table 1 also gives details of the number of different labels in each set. Table 1 shows that the average length of Condition and Purpose is much longer than that of the other components, as the two types are generally in the form of clauses.

We compare CA4P-483 with related datasets in
|  |  |
| --- | --- |
| # doc | 483 |
| # sentences | 11,565 |
| # sentences with ann | 3,385 |
| Avg sentences len | 79.06 |

| Type | Num | Train | Dev | Test | Avg len | Kappa |
| --- | --- | --- | --- | --- | --- | --- |
| Data | 21,241 | 18,925 | 2,521 | 2,331 | 4.68 | 85.39% |
| Collect | 5,134 | 4,133 | 576 | 528 | 2.03 | 73.78% |
| Share | 4,976 | 3,989 | 533 | 505 | 2.10 | 84.87% |
| Controller | 8,424 | 6,085 | 815 | 782 | 2.49 | 82.22% |
| Condition | 4,917 | 5,477 | 716 | 713 | 14.41 | 50.07% |
| Receiver | 3,202 | 2,776 | 360 | 350 | 4.29 | 89.88% |
| Purpose | 4,683 | 6,442 | 860 | 867 | 19.24 | 74.18% |
| Total | 52,577 | 47,827 | 6,381 | 6,076 |  |  |
Table 1: The statistics of CA4P-483. Here, "Avg" denotes average, "ann" denotes annotation, "len" denotes length, and "#" denotes "the number of".

Table 2. We first compare our corpus with Chinese sequence labeling datasets, such as MSRA (Zhang et al., 2006), OntoNotes (Weischedel et al., 2011), Weibo (Peng and Dredze, 2016), PeopleDairy (Zhang and Chen, 2017), Resume (Zhang and Yang, 2018), CLUENER2020 (Xu et al., 2020), and CNERTA (Sui et al., 2021). We also involve widely used English sequence labeling datasets, namely Twitter-2015 (Zhang et al., 2018) and Twitter-2017 (Lu et al., 2018). We also consider privacy policy datasets, namely Online Privacy Policies (OPP-115) (Wilson et al., 2016) and Android app privacy policies (APP-350) (Zimmeck et al., 2019).

We first compare the size and classes of the different datasets. Table 2 shows that CA4P-483 contains abundant semantics, i.e., CA4P-483 has seven annotation classes, more than most other datasets (seven out of nine). For privacy policy-related datasets, the comparison is conducted on the number of documents, as one privacy policy corresponds to one app. OPP-115 annotates at the sentence level, and APP-350 only annotates data controllers, data entities, and modifiers. Since APP-350 splits data entities into 16 categories, APP-350 exhibits more classes than CA4P-483. To summarize, CA4P-483 is the first and largest Chinese Android privacy policy dataset with abundant semantic labels.

# 3 Task and Experiment Setup

# 3.1 Task description

$\mathsf{CSP}^3$ figures out *who* collects or shares *what kind of data* to *whom*, under *which kind of condition*, and *for what*. The emphasized words correspond to each type of annotation. As $\mathsf{CSP}^3$ concentrates
| Dataset | # Train | # Dev | # Test | Size | Language | # Class |
| --- | --- | --- | --- | --- | --- | --- |
| MSRA | 41,728 | 4,636 | 4,365 | 50K | Chinese | 3 |
| PeopleDairy | 20,864 | 2,318 | 4,636 | 23k | Chinese | 3 |
| Weibo | 1,350 | 270 | 270 | 2k | Chinese | 4 |
| Resume | 3,821 | 463 | 477 | 2k | Chinese | 8 |
| CLUENER2020 | 10,748 | 1,343 | 1,345 | 13K | Chinese | 10 |
| CNERTA | 34,102 | 4,440 | 4,445 | 42,987 | Chinese | 3 |
| Twitter-2015 | 6,176 | 1,546 | 5,078 | 12,784 | English | 4 |
| Twitter-2017 | 4,290 | 1,432 | 1,459 | 7,181 | English | 4 |
| CA4P-483 | 14,678 | 2,059 | 1,842 | 18,579 | Chinese | 7 |

| Dataset | # Train doc | # Dev doc | # Test doc | Size | Language | # Class |
| --- | --- | --- | --- | --- | --- | --- |
| OPP-115 | 75 doc | / | 40 doc | 115 doc | English | 12 |
| APP-350 | 188 doc | 62 doc | 100 doc | 350 doc | English | 18 |
| CA4P-483 | 386 doc | 48 doc | 49 doc | 483 doc | Chinese | 7 |
Table 2: A comparison between CA4P-483 and other popular sequence labeling datasets. # denotes "number". "doc" denotes "documents".

on data access-related sentences, we first locate the sentences based on data collection and sharing words (Andow et al., 2020; Yu et al., 2016). We summarize the word list based on laws, app market requirements, and previous works (Yu et al., 2016; Andow et al., 2019, 2020) (detailed in Appendix 9.1). Given a sentence $C = c_{1}, c_{2}, \ldots, c_{n}$ and its labels $L = l_{1}, l_{2}, \ldots, l_{n}$ , where $c_{i}$ denotes the $i$ -th Chinese character and $l_{i}$ denotes $c_{i}$ 's label, the task is to predict the label sequence.

# 3.2 Baseline models

This section introduces baseline methods for the sequence labeling task on CA4P-483.

# 3.2.1 Probabilistic models

Hidden Markov Model (HMM): HMM$^2$ (Freitag and McCallum, 2000) is one of the most classic probabilistic models and is applied as one of our baselines.

Conditional Random Field (CRF): CRF$^3$ (Lafferty et al., 2001) aggregates the advantages of HMM and counters the label bias problem.

# 3.2.2 Neural network models

BiLSTM: BiLSTM$^2$ (Graves and Schmidhuber, 2005) uses a neural network to learn a mapping from sentences to labels through nonlinear transformations in a high-dimensional space.

BiLSTM-CRF: BiLSTM-CRF$^2$ uses BiLSTM as an encoder to map the sentences into a high-dimensional vector and uses CRF as a decoder.

BERT-BiLSTM-CRF: Since BiLSTM-CRF is still limited by its word vector representation, BERT-BiLSTM-CRF$^4$ (Dai et al., 2019) uses BERT as a feature extractor and takes advantage of BiLSTM and CRF for sequence labeling.

# 3.2.3 Lattice enhanced models

As Chinese words are not naturally separated by spaces, character-based methods omit the information hidden in word sequences. Thus, lattice-based methods that integrate lattice information have been proposed for Chinese sequence labeling and achieve promising performance.
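The lattice input these models consume can be sketched concretely: every substring of the character sequence that matches an entry in a word lexicon becomes an extra arc over the character chain. The sketch below is illustrative only; the toy lexicon and sentence are hypothetical and are not drawn from CA4P-483 or the Lattice-LSTM codebase.

```python
# Sketch of lattice construction for Chinese sequence labeling: keep, for
# each position, every multi-character substring that matches a lexicon
# entry as an arc (start, end, word) over the base character sequence.

def build_lattice(chars, lexicon, max_word_len=4):
    """Return (start, end, word) spans whose characters match the lexicon."""
    arcs = []
    for i in range(len(chars)):
        for j in range(i + 1, min(i + max_word_len, len(chars)) + 1):
            word = "".join(chars[i:j])
            if len(word) > 1 and word in lexicon:   # single chars form the base chain
                arcs.append((i, j, word))
    return arcs

lexicon = {"收集", "信息", "登录信息"}        # hypothetical toy dictionary
sentence = list("收集您的登录信息")            # "collect your login information"
print(build_lattice(sentence, lexicon))
# [(0, 2, '收集'), (4, 8, '登录信息'), (6, 8, '信息')]
```

Roughly speaking, Lattice-LSTM attaches a word-level cell to each such arc and merges its state into the character-level recurrence, so matched words supply segmentation evidence without committing to a single segmentation.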
LatticeLSTM: LatticeLSTM$^5$ (Zhang and Yang, 2018) takes as input the character sequence together with all character subsequences that match words in a predefined lexicon dictionary.

# 3.3 Setup and implementation details

We evaluate baselines on an Ubuntu 20.04 server with 5 NVIDIA GeForce 3090 GPUs (24 GB memory each), 512 GB memory, and an Intel Xeon 6226R CPU. Next, we present our implementation details. For HMM, the number of states, i.e., the number of classes in our dataset under the BIO tagging scheme, is set to 22, and the number of observations, i.e., the number of distinct characters, is set to 1756, which is the default value$^2$. For CRF, we use the default settings in $\mathrm{CRF}++^3$ . For BiLSTM and BiLSTM-CRF, the embedding size is 128, the learning rate is 0.001, and we train the models for 30 epochs with a batch size of 64. For BERT-BiLSTM-CRF$^4$, we use the Chinese bert-base$^6$ pretrained model and fine-tune it on our training data. The BiLSTM is set with 128 hidden

$^4$ https://github.com/macany/BERT-BiLSTM-CRF-NER
$^5$ https://github.com/LeeSureman/Batch_Parallel_LatticeLSTM
$^{6}$ https://github.com/google-research/bert
|  | BiLSTM-CRF |  |  | BERT-BiLSTM-CRF |  |  | LatticeLSTM |  |  | Manual Agreements |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 |
| Collect | 51.80% | 57.50% | 54.47% | 50.59% | 68.89% | 58.34% | 69.23% | 67.05% | 65.10% | 96.30% | 92.07% | 94.14% |
| Condition | 81.75% | 72.76% | 77.00% | 31.59% | 46.46% | 37.61% | 72.76% | 77.00% | 81.75% | 93.53% | 84.50% | 88.79% |
| Data | 77.85% | 58.60% | 66.44% | 51.11% | 67.19% | 58.06% | 58.60% | 66.44% | 77.85% | 96.20% | 91.79% | 93.94% |
| Controller | 64.10% | 61.08% | 62.50% | 56.53% | 63.80% | 59.94% | 61.08% | 62.50% | 64.10% | 96.96% | 90.18% | 93.45% |
| Purpose | 70.88% | 54.61% | 60.64% | 40.45% | 48.46% | 44.09% | 54.61% | 60.64% | 70.88% | 95.64% | 92.61% | 94.10% |
| Share | 68.31% | 51.83% | 58.88% | 59.08% | 45.61% | 51.48% | 51.83% | 58.88% | 68.31% | 96.10% | 94.71% | 95.40% |
| Receiver | 91.70% | 92.68% | 92.19% | 22.96% | 27.84% | 25.17% | 92.68% | 92.19% | 91.70% | 97.33% | 85.00% | 90.75% |
| O | 91.70% | 92.68% | 92.19% | 46.22% | 57.35% | 51.18% | 92.57% | 92.79% | 92.35% | / | / | / |
| Average | 86.94% | 86.90% | 86.84% | 37.54% | 49.42% | 42.66% | 72.27% | 72.27% | 72.27% | 96.01% | 90.12% | 92.94% |
+ +Table 3: Evaluation performance of three types of methods on our dataset. "O" denotes others. + +
| Model | P | R | F1 |
| --- | --- | --- | --- |
| HMM | 77.47% | 66.11% | 69.63% |
| CRF | 85.52% | 86.28% | 85.63% |
| BiLSTM | 85.13% | 85.99% | 85.05% |
| BiLSTM-CRF | 86.94% | 86.90% | 86.84% |
| BERT-BiLSTM-CRF | 46.22% | 57.35% | 51.18% |
| Lattice-LSTM | 78.63% | 80.75% | 79.67% |
Table 4: Overall performance of baseline methods on our dataset.

layer and a learning rate of 1e-5. The BERT-BiLSTM-CRF model is trained on our dataset with the default settings$^4$, where the batch size is 64, the learning rate is 1e-5, the dropout rate is 0.5, the gradient clip is 0.5, and the early stopping strategy is "stop if no decrease". For Lattice-LSTM, we use the same lattice provided in (Zhang and Yang, 2018).

# 4 Evaluation

In this section, we evaluate the baseline methods on all 18,579 sentences, which are divided into training, development, and testing sets as detailed in Table 2. Following previous research (Wilson et al., 2016; Sui et al., 2021), we apply precision (P), recall (R), and F1-score (F1) to evaluate the baselines.

Table 4 shows the overall performance of the families of baselines on CA4P-483. BiLSTM-CRF achieves the most promising performance, which may benefit from the enhanced representation ability of the bidirectional LSTM and the ability of the CRF to capture context information. LatticeLSTM shows a strong ability to capture lattice information, while some clauses in our labels may mislead the model when it learns patterns.

We analyze the identification performance on each component to investigate the challenges and limitations of CA4P-483. Table 3 gives the detailed performance of the baselines, i.e., CRF-based models, BERT-based models, and lattice-based models. Besides, we also compare the performance with the manual agreements to demonstrate task difficulties. Table 3 demonstrates that BiLSTM-CRF and Lattice-LSTM achieve over $90\%$ performance on Receiver because Receiver possesses few overlaps with other labels and is in the format of words.

![](images/961c3b71bf2f7f5e05445cee77d12e9952c687fd1f3b8f1740422491cb782a18.jpg)
Figure 2: Confusion matrix of BiLSTM-CRF results on CA4P-483.
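As a point of reference, one common way to compute such P/R/F1 scores for sequence labeling is exact span matching over BIO tags; the exact protocol behind the tables above may differ (e.g., token-level scoring for the "O" row). A minimal sketch with hypothetical toy sequences:

```python
# Entity-level precision/recall/F1 sketch: a predicted span counts as
# correct only if its type AND boundaries both match a gold span exactly.
# The BIO sequences below are hypothetical, for illustration only.

def bio_spans(tags):
    """Extract (type, start, end) spans from a BIO tag sequence."""
    spans, start = [], None
    for i, tag in enumerate(tags + ["O"]):          # sentinel flushes final span
        if start is not None and not tag.startswith("I-"):
            spans.append((tags[start][2:], start, i))
            start = None
        if tag.startswith("B-"):
            start = i
    return spans

def prf1(gold_tags, pred_tags):
    gold, pred = set(bio_spans(gold_tags)), set(bio_spans(pred_tags))
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = ["B-Data", "I-Data", "O", "B-Receiver"]
pred = ["B-Data", "I-Data", "O", "B-Purpose"]
print(prf1(gold, pred))  # → (0.5, 0.5, 0.5): one of two spans matched
```

Libraries such as seqeval implement the same idea with more careful handling of malformed tag sequences.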
Collect and Share only achieve around $60\%$ precision and F1-score because the two types of entities overlap to some extent, as shown in Fig.1 and Fig.5. Table 3 shows that BiLSTM-CRF achieves better precision on Condition than Lattice-LSTM, which may be caused by the fact that Condition and Purpose are mainly in the format of attributive clauses rather than words.

Next, we analyze the confusion matrix of BiLSTM-CRF, which performs best on CA4P-483. In Fig.2, the depth of the background color denotes the proportion of classifications (the darker the color, the higher the proportion), and the digit denotes the number of classification results. Fig.2 indicates that most of the misclassified samples are related to Condition.

![](images/4bc6ae71ca4be9fa2aff44f5243deb5d90d58c121da9f9e78e028e8ec339d973.jpg)
(a) Missing condition.

![](images/7e0c1412a0dd8051ee8227a910d2d216e6159d1c8476624aca8af9b694c8541b.jpg)
(b) Error prediction when controller is user.

To gain a deeper understanding of the divergences between ground truth and predictions, we inspect the misclassifications. We find that the algorithm may fail to identify Conditions that occur in adverbial clauses, as shown in Fig.3(a), where the Chinese highlighting marks the ground truth and the English highlighting marks the prediction results. Besides, when the data controller is the user, as shown in Fig.3(b), the algorithms fail to distinguish Purpose and Condition. More illustrations in Appendix 9.3 also reveal that models need to be well designed to learn deep semantic information, such as distinguishing overlaps among components and distinguishing Purpose in modifiers.

# 5 Case Study

In this section, we present cases of potential applications of CA4P-483, such as checking whether privacy policies comply with regulatory requirements and whether privacy policies are consistent with the apps' functionalities.

Regulation compliance identification.
Chinese privacy-related laws (PISS, 2020; NISSTC, 2020; CLPRC, 2016) ask developers to clearly state the purposes and conditions for processing user privacy data. We first investigate the distribution of annotations in CA4P-483. Fig.4 sketches the box plot of the frequency of components in each privacy policy. Fig.4 indicates that some privacy policies claim data processing without clarifying the purpose and condition, i.e., the minimum frequency of Data is positive while that of Purpose is zero. We manually inspect these privacy policies. We find that the privacy policy of the app whose package name is com.yitongweather claims that the app collects users' data while omitting the purposes or conditions of data access, which violates regulation requirements. Thus, CA4P-483 can facilitate research in the area of privacy compliance identification (Andow et al., 2019; Barth et al., 2022).

![](images/8ef95504e4cfa9bd66caf2e8cdb9013e41867d67a18cd3099dba5236cd30786c.jpg)
Figure 3: The visualization of divergence between ground truth and prediction.
Figure 4: Components distribution of CA4P-483.

App behavior consistency identification. To improve the security of the Android community, researchers design systems (Andow et al., 2020; Yu et al., 2018) to identify the consistency between privacy policies and app behaviors to prevent apps from abusing user data or conducting malicious behavior. One popular method to check an app's behavior is dynamic analysis (Yan and Yin, 2012), i.e., running the app on a device and checking the log information. To investigate the application of CA4P-483 in the security community, we first identify the privacy policies without purpose or condition components. Then, we install the app on a smartphone, manually interact with the app, and try our best to trigger all possible functions in the app by clicking every visible button. We use logcat to capture the app's running information.
We find that the app (id: com.chengmi/signin) requests device storage to use the app's functionalities while no condition-related statements are claimed in its privacy policy. With more intelligent automatic software engineering techniques, CA4P-483 can facilitate the research in this area, and more vulnerabilities in the consistency between app behavior and privacy policy could be investigated.

# 6 Discussion

In this section, we first discuss difficulties in CA4P-483. Then, we propose potential research topics on CA4P-483. Finally, we discuss limitations of CA4P-483. Besides, we also discuss ethical concerns in Appendix 9.2.

# 6.1 Dataset difficulties

Based on the evaluation results in §4 and related work, we raise the following difficulties: 1) How to distinguish overlaps between components? 2) How to effectively deal with the length variation of components? 3) Difficulties in semantic analysis.

Different from traditional sequence labeling tasks, components in our dataset may contain other components. One scenario is that a Purpose or Condition may be used to modify the data, for example, "We will collect your login information (我们会收集您的登录信息)", where "login" may be understood as the purpose of the information. Since traditional sequence labeling methods predict one label per character, it is hard to distinguish component overlaps. One possible solution is using multi-modal algorithms (Sui et al., 2021), which demonstrate effectiveness for distinguishing boundaries between entities. Similar to traditional news or social media datasets that use voice or images as additional information, integrating apps' analysis results may help distinguish different components.

Second, existing sequence labeling tasks mainly concentrate on entity recognition, while practical applications may require labeling clauses for further analysis. Table 1 shows that the average length of components in CA4P-483 varies from 2.03 to 19.24.
$\mathrm{CSP}^3$ not only requires identifying words but also asks the models to identify the role of clauses.

The semantic analysis of privacy policies is still a difficulty. Laws require apps to clearly clarify how they collect and share user data. Privacy policies can claim that apps will share data with third parties or that third parties will collect user data. In this way, it becomes essential to understand the context to distinguish the controller and the action type. It could be a solution to use multi-modal algorithms integrating program analysis to improve the performance; however, distinguishing the third party from the app itself remains a challenge in program analysis.

# 6.2 Further directions

CA4P-483 enables research in directions of interest to natural language processing, privacy protection, and cyber security (Zhu et al., 2022a,b). We propose some potential research interests for further work below.

Emotional analysis in privacy policies. Existing research (Andow et al., 2019) points out that privacy policies may conflict across contexts. For example, a privacy policy may claim NOT to collect user data in one sentence while claiming to access user data in other sections. Existing methods (Andow et al., 2019, 2020; Yu et al., 2018) use negative words to identify whether conflicts exist in the privacy policy and ignore complications like double negatives. In Chinese privacy policies, negative expressions are more complicated (Liu, 2012). Thus, emotional analysis can help analysts better understand the semantics of privacy policies.

Privacy compliance detection. CA4P-483 provides detailed labels for data usage, including the purposes and conditions. It is necessary to investigate the detailed requirements of laws and further identify whether existing privacy policies violate them.

Cyber security investigation. Privacy policies ought to reflect the functionalities of apps.
Some apps may conceal malicious behavior in their functionalities and not claim the behavior in their privacy policies. CA4P-483 can help identify the consistency between apps' functionalities and behavior by combining natural language processing algorithms and code analysis.

# 6.3 Limitations

CA4P-483 provides detailed annotations for data access statements in privacy policies. However, analyzing privacy policies using CA4P-483 depends on the performance of locating data access-related sentences. We use data collection and sharing words to locate the sentences. However, some Purpose and Condition claims may be given in an enumeration format, such as "we will not share your personal data under the following conditions". CA4P-483 is limited when capturing information in such an enumeration format.

Privacy policies possess timeliness. App developers should provide a privacy policy when publishing their apps. When an app's functionality updates, the privacy policy ought to be updated accordingly. Our dataset is limited to the time at which it was collected. When combining our dataset with program analysis, this factor should be considered.

# 7 Related Work

Prior privacy policy datasets are all in English and omit other languages. OPP-115 (Wilson et al., 2016) collects 115 English websites' privacy policies and makes annotations at the sentence level. OPP-115 designs labels based on previous works (McDonald and Cranor, 2008; Staff, 2011). APP-350 (Zimmeck et al., 2019) gathers Android apps' privacy policies written in English. APP-350 only conducts limited annotations, including two types of data controllers, namely first party and third party, thirteen types of specific data, and two types of modifiers, i.e., do and do not.

Existing Chinese sequence labeling datasets are generally gathered from news (Zhang et al., 2006; Zhang and Chen, 2017; Sui et al., 2021) and social media (Peng and Dredze, 2016; Weischedel et al., 2011; Zhang and Yang, 2018).
The datasets include abundant corpora, but their annotations are limited to location, person name, and organization. Even though CLUENER2020 (Xu et al., 2020) expands the labels with types such as game, government, and book, the datasets are still hard to apply to specific downstream tasks. CNERTA (Sui et al., 2021) includes an additional modality, i.e., voice data, to improve the sequence labeling performance.

# 8 Conclusion

This paper introduces the first Chinese Android application privacy policy dataset, CA4P-483. CA4P-483 contains fine-grained annotations based on the requirements of privacy-related laws and regulations. The dataset can help promote natural language processing research on practical downstream tasks. We also conduct experimental evaluations of popular baselines on our dataset and propose potential research directions based on the result analysis. We further conduct case studies to investigate potential applications of our dataset in helping the software engineering and cyber security communities protect user privacy. In the future, we hope to build new models for CA4P-483 to counter these challenges.

# Acknowledgements

We thank the anonymous reviewers for their insightful comments. This work was partially supported by Hong Kong RGC Project (PolyU15224121), NSFC Young Scientists Fund (62006203), and HKPolyU collaborative research project (ZGBE).

# References

Alir3z4. 2011. html2text. https://github.com/Alir3z4/html2text.
Benjamin Andow, Samin Yaseer Mahmud, Wenyu Wang, Justin Whitaker, William Enck, Bradley Reaves, Kapil Singh, and Tao Xie. 2019. PolicyLint: investigating internal privacy policy contradictions on Google Play. In 28th USENIX Security Symposium (USENIX Security 19), pages 585-602.
Benjamin Andow, Samin Yaseer Mahmud, Justin Whitaker, William Enck, Bradley Reaves, Kapil Singh, and Serge Egelman. 2020.
Actions speak louder than words: Entity-Sensitive privacy policy and data flow analysis with PoliCheck. In 29th USENIX Security Symposium (USENIX Security 20), pages 985-1002. USENIX Association. +Susanne Barth, Dan Ionita, and Pieter Hartel. 2022. Understanding online privacy—a systematic review of privacy visualizations and privacy by design guidelines. ACM Computing Surveys (CSUR), 55(3):1-37. +Liang Cai and Hao Chen. 2012. On the practicality of motion based keystroke inference attack. In International Conference on Trust and Trustworthy Computing, pages 273-290. Springer. +CCPA. 2016. California consumer privacy act regulations. https://govt.westlaw.com/calregs. +Juan Miguel Cejuela, Peter McQuilton, Laura Ponting, Steven J Marygold, Raymund Stefancsik, Gillian H Millburn, Burkhard Rost, FlyBase Consortium, et al. 2014. tagtog: interactive and text-mining-assisted annotation of gene mentions in plos full-text articles. Database, 2014. +CLPRC. 2016. Cybersecurity law of the people's republic of china. http://www.gov.cn/xinwen/2016-11/07/content_5129723.htm. +Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement, 20(1):37-46. +National Information Security Standardization Technical Committee. 2022. Information security technology — basic specification for collecting personal information in mobile internet applications. http://www.cac.gov.cn/1124853418_15652571749671n.pdf. +Zhenjin Dai, Xutao Wang, Pin Ni, Yuming Li, Gangmin Li, and Xuming Bai. 2019. Named entity recognition using bert bilstm crf for chinese electronic health records. In 2019 12th international congress on image and signal processing, biomedical engineering and informatics (cisp-bmei), pages 1-5. IEEE. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Keyang Ding, Jing Li, and Yuji Zhang. 2020. 
Hashtags, emotions, and comments: a large-scale dataset to understand fine-grained social emotions to online topics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1376-1382. + +Ming Fan, Le Yu, Sen Chen, Hao Zhou, Xiapu Luo, Shuyue Li, Yang Liu, Jun Liu, and Ting Liu. 2020. An empirical evaluation of gdpr compliance violations in android mhealth apps. In 2020 IEEE 31st international symposium on software reliability engineering (ISSRE), pages 253-264. IEEE. +Dayne Freitag and Andrew McCallum. 2000. Information extraction with hmm structures learned by stochastic optimization. AAAI/IAAI, 2000:584-589. +GDPR. 2016. General data protection regulation. https://gdpr-info.eu. +Google. 2022a. Google play. https://play.google.com. +Google. 2022b. Google play policies. https://developer.android.com/distribute/play-policies. +Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. *Neural networks*, 18(5-6):602-610. +Huawei. 2022a. App gallery. https://appgallery.huawei.com/Featured. +Huawei. 2022b. Appgallery review guidelines. https://developer.huawei.com/consumer/en/doc/30202. +Taku Kudo. 2005. Crf++: Yet another crf toolkit. http://crfpp.sourceforge.net/. +John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. +Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pages 260-270. +Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis lectures on human language technologies, 5(1):1-167. +Di Lu, Leonardo Neves, Vitor Carvalho, Ning Zhang, and Heng Ji. 2018. Visual attention model for name tagging in multimodal social media. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1990-1999.
+Aleecia M McDonald and Lorrie Faith Cranor. 2008. The cost of reading privacy policies. Isjlp, 4:543.
+Sudha Morwal, Nusrat Jahan, and Deepti Chopra. 2012. Named entity recognition using hidden markov model (hmm). International Journal on Natural Language Computing (IJNLC), Vol. 1.

Preksha Nema, Pauline Anthonysamy, Nina Taft, and Sai Teja Peddinti. 2022. Analyzing user perspectives on mobile app privacy at scale. In International Conference on Software Engineering (ICSE).
+NISSTC. 2020. Cybersecurity practices guidelines - security guidelines for using software development kit (sdk) for mobile internet applications (app) (tc260-pg-20205a). https://www.tc260.org.cn/front/postDetail.html?id=20201126161240.
+Cyberspace Administration of China, Ministry of Industry and Information Technology, Ministry of Public Security, and State Administration for Market Regulation. 2019. Measures for determining the illegal collection and use of personal information by apps. http://m.legaldaily.com.cn/zt/content/2021-11/16/content_8628724.htm.
+Nanyun Peng and Mark Dredze. 2016. Improving named entity recognition for chinese social media with word segmentation representation learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 149-155.
+PISS. 2020. Information security technology - personal information security specification. https://www.tc260.org.cn/front/postDetail.html?id=20200918200432.
+Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. Membership inference attacks against machine learning models. In 2017 IEEE symposium on security and privacy (SP), pages 3-18. IEEE.
+Gulshan Shrivastava and Prabhat Kumar. 2021. Android application behavioural analysis for data leakage. Expert Systems, 38(1):e12468.
+Nir Sivan, Ron Bitton, and Asaf Shabtai. 2019.
Analysis of location data leakage in the internet traffic of android-based mobile devices. In 22nd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2019), pages 243-260. +FTC Staff. 2011. Protecting consumer privacy in an era of rapid change—a proposed framework for businesses and policymakers. *Journal of Privacy and Confidentiality*, 3(1). +statcounter. 2022. Mobile operating system market share worldwide. https://gs.statcounter.com/os-market-share/mobile/worldwide. +Statista. 2022. Number of mobile app downloads worldwide from 2019 to 2021, by country. https://www.statista.com/statistics/1287159/app-downloads-by-country/. +Dianbo Sui, Zhengkun Tian, Yubo Chen, Kang Liu, and Jun Zhao. 2021. A large-scale chinese multimodal ner dataset with speech clues. In Proceedings of the + +59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2807-2818. +Ralph Weischedel, Sameer Pradhan, Lance Ramshaw, Martha Palmer, Nianwen Xue, Mitchell Marcus, Ann Taylor, Craig Greenberg, Eduard Hovy, Robert Belvin, et al. 2011. Ontonotes release 4.0. LDC2011T03, Philadelphia, Penn.: Linguistic Data Consortium. +Zhiyuan Wen, Jiannong Cao, Ruosong Yang, Shuaiqi Liu, and Jiaxing Shen. 2021. Automatically select emotion for response via personality-affected emotion transition. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 5010–5020. +Shomir Wilson, Florian Schaub, Aswarth Abhilash Dara, Frederick Liu, Sushain Cherivirala, Pedro Giovanni Leon, Mads Schaarup Andersen, Sebastian Zimmeck, Kanthashree Mysore Sathyendra, N Cameron Russell, et al. 2016. The creation and analysis of a website privacy policy corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1330-1340. 
+Liang Xu, Qianqian Dong, Cong Yu, Yin Tian, Weitang Liu, Lu Li, and Xuanwei Zhang. 2020. Cluener2020: Fine-grained named entity recognition for chinese. arXiv preprint arXiv:2001.04351.
+Lok Kwong Yan and Heng Yin. 2012. DroidScope: Seamlessly reconstructing the OS and Dalvik semantic views for dynamic android malware analysis. In 21st USENIX security symposium (USENIX security 12), pages 569-584.
+Yu Yang, Jiannong Cao, Milos Stojmenovic, Senzhang Wang, Yiran Cheng, Chun Lum, and Zhetao Li. 2021. Time-capturing dynamic graph embedding for temporal linkage evolution. IEEE Transactions on Knowledge and Data Engineering.
+Le Yu, Xiapu Luo, Jiachi Chen, Hao Zhou, Tao Zhang, Henry Chang, and Hareton K. N. Leung. 2018. Ppchecker: Towards accessing the trustworthiness of android apps' privacy policies. IEEE Transactions on Software Engineering.
+Le Yu, Xiapu Luo, Xule Liu, and Tao Zhang. 2016. Can we trust the privacy policies of android apps? In 2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pages 538-549. IEEE.
+Le Yu, Tao Zhang, Xiapu Luo, and Lei Xue. 2015. Autoppg: Towards automatic generation of privacy policy for android applications. In Proceedings of the 5th Annual ACM CCS Workshop on Security and Privacy in Smartphones and Mobile Devices.
+Jingyuan Zhang and Mingjie Chen. 2017. People's Daily. https://github.com/zjy-ucas/ChineseNER/.

+Qi Zhang, Jinlan Fu, Xiaoyu Liu, and Xuanjing Huang. 2018. Adaptive co-attention network for named entity recognition in tweets. In Thirty-Second AAAI Conference on Artificial Intelligence.
+Suxiang Zhang, Ying Qin, Wen-Juan Hou, and Xiaojie Wang. 2006. Word segmentation and named entity recognition for SIGHAN bakeoff3. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 158-161.
+Yue Zhang and Jie Yang. 2018. Chinese ner using lattice lstm.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1554-1564. +Kaifa Zhao, Hao Zhou, Yulin Zhu, Xian Zhan, Kai Zhou, Jianfeng Li, Le Yu, Wei Yuan, and Xiapu Luo. 2021. Structural attack against graph based android malware detection. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pages 3218-3235. +Hao Zhou, Xiapu Luo, Haoyu Wang, and Haipeng Cai. 2022a. Uncovering intent based leak of sensitive data in Android framework. In ACM Conference on Computer and Communications Security (CCS). +Hao Zhou, Haoyu Wang, Xiapu Luo, Ting Chen, Yajin Zhou, and Ting Wang. 2022b. Uncovering cross-context inconsistent access control enforcement in android. In The 2022 Network and Distributed System Security Symposium (NDSS'22). +Hao Zhou, Haoyu Wang, Shuohan Wu, Xiapu Luo, Yajin Zhou, Ting Chen, and Ting Wang. 2021. Finding the missing piece: Permission specification analysis for android ndk. In 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 505-516. IEEE. +Hao Zhou, Haoyu Wang, Yajin Zhou, Xiapu Luo, Yutian Tang, Lei Xue, and Ting Wang. 2020. Demystifying diehard android apps. In 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 187-198. IEEE. +Yulin Zhu, Yuni Lai, Kaifa Zhao, Xiapu Luo, Mingquan Yuan, Jian Ren, and Kai Zhou. 2022a. Adversarial robustness of graph-based anomaly detection. arXiv preprint arXiv:2206.08260. +Yulin Zhu, Yuni Lai, Kaifa Zhao, Xiapu Luo, Mingquan Yuan, Jian Ren, and Kai Zhou. 2022b. Binarizedattack: Structural poisoning attacks to graph-based anomaly detection. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), pages 14-26. IEEE. +Sebastian Zimmeck, Peter Story, Daniel Smullen, Abhilasha Ravichander, Ziqi Wang, Joel R Reidenberg, N Cameron Russell, and Norman Sadeh. 2019. Maps: Scaling privacy compliance analysis to a million apps. Proc. 
Priv. Enhancing Tech., 2019:66. + +
| Category | Words |
| --- | --- |
| Collection | 收集 (collect), 获取 (obtain), 接受 (accept), 接收 (receive), 保存 (save), 使用 (use), 采集 (gather), 记录 (record), 存储 (store), 储存 (store) |
| Sharing | 披露 (disclose), 分享 (share), 共享 (share), 交换 (exchange), 报告 (report), 公布 (publish), 发送 (send), 转移 (transfer), 迁移 (migrate), 转让 (transfer), 公开 (make public), 透露 (disclose), 提供 (provide) |
+

Table 5: Data access word list

# 9 Appendix

# 9.1 Data access word list

Table 5 gives the data collection and sharing word list, which is summarized from laws (NISSTC, 2020; GDPR, 2016; PISS, 2020), app market requirements (Google, 2022b; Huawei, 2022b), and previous works (Yu et al., 2016; Andow et al., 2019, 2020). With these words, researchers can locate data access-related sentences and conduct further analysis to extract entities of interest, such as data controller, data entity, collection, sharing, condition, purpose, and data receiver.

# 9.2 Ethical Consideration

CA4P-483 is a dataset constructed by gathering publicly available privacy policy websites without posing any ethical problems. First, privacy policies are publicly accessible in multiple ways. According to application markets' requirements, developers or companies must provide these privacy policy websites when they publish their apps. Privacy policies also must be shown to users the first time they use an app, according to legal requirements (PISS, 2020). Second, we do not collect any private information. Moreover, CA4P-483 is proposed to promote research on protecting user privacy.

For the annotations, we hired part-time research assistants from our university to label the dataset. They are compensated at 9 USD/hour and work at most 17.5 hours per week.

# 9.3 Prediction results analysis

In this section, we show the prediction results of the algorithm and some common problems. These problems may reflect the limitations of existing models and pose challenges for designing algorithms

![](images/8493296b7a3d9060e04c1efc7afff0ed83f4a408a832e0ceaf2af7993c02827a.jpg)
Figure 5: Overlapping between components. Differences between ground truth and prediction.

![](images/302948e527f6b9cff79937872a4434790692b4b9388de77ccf202158e0de0c92.jpg)
Figure 6: The visualization of divergence between ground truth and prediction for missing Purpose.

for our data scenario.

Fig.
5 illustrates a case where components overlap, e.g., "basic registration or login information (基本注册或登录信息)". Here, "basic registration or login information" should be a single Data entity, as highlighted in the Chinese version, i.e., the ground truth. However, the algorithm predicts "basic registration or login (基本注册或登录)" as Purpose and "information (信息)" as Data, as highlighted in the English version. The meaning of the colors for the different categories can be found in Figure 1. Fig. 6 shows that the pre-trained model may misclassify Purpose as Condition when the data controller is the user. \ No newline at end of file diff --git a/afinegrainedchinesesoftwareprivacypolicydatasetforsequencelabelingandregulationcompliantidentification/images.zip b/afinegrainedchinesesoftwareprivacypolicydatasetforsequencelabelingandregulationcompliantidentification/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..77fe2e3dfc365f2fe5ed7e02ae7c44733691bd83 --- /dev/null +++ b/afinegrainedchinesesoftwareprivacypolicydatasetforsequencelabelingandregulationcompliantidentification/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c3ce60907e4752cf9cac7022d8fe7044e59b1562a7168197378c6f751c2e3510 +size 518653 diff --git a/afinegrainedchinesesoftwareprivacypolicydatasetforsequencelabelingandregulationcompliantidentification/layout.json b/afinegrainedchinesesoftwareprivacypolicydatasetforsequencelabelingandregulationcompliantidentification/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..22194884bfb4a4d951b9e5051941c3046369a220 --- /dev/null +++ b/afinegrainedchinesesoftwareprivacypolicydatasetforsequencelabelingandregulationcompliantidentification/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1716a0aa2b5cddab282079b4535520cffac7239fd979b26d6565889847acc899 +size 376951 diff --git
a/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/3964ca02-a577-46ad-92bc-061876ab86d4_content_list.json b/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/3964ca02-a577-46ad-92bc-061876ab86d4_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..2d6ecf340236194c6ff5db0fa4cb99dfcf00ed57 --- /dev/null +++ b/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/3964ca02-a577-46ad-92bc-061876ab86d4_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d8c29322bd055b3696a98db1df37879b714003e377ace977bcb6b821dd34c42c +size 133698 diff --git a/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/3964ca02-a577-46ad-92bc-061876ab86d4_model.json b/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/3964ca02-a577-46ad-92bc-061876ab86d4_model.json new file mode 100644 index 0000000000000000000000000000000000000000..bbaec7bec7a9012b66f0045c0b923f1c8dee77de --- /dev/null +++ b/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/3964ca02-a577-46ad-92bc-061876ab86d4_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c1961ca99e6a2690c671f18b45f145328ff50d5aad8086c09f30642fa7117422 +size 154701 diff --git a/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/3964ca02-a577-46ad-92bc-061876ab86d4_origin.pdf b/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/3964ca02-a577-46ad-92bc-061876ab86d4_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8f50e19977e1b5bfb3248109d3431baa81053d46 --- /dev/null +++ b/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/3964ca02-a577-46ad-92bc-061876ab86d4_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c3e6cca16212f8f506dec4e58dbe2113cb92a9a44c8ac69d342d447c4bab32b +size 1668406 diff --git 
a/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/full.md b/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0068d9e98c91ee523446b80ec64c07e4c8943ee9 --- /dev/null +++ b/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/full.md @@ -0,0 +1,515 @@ +# A Framework for Adapting Pre-Trained Language Models to Knowledge Graph Completion + +Justin Lovelace* + +Computer Science Department + +Cornell University + +j13353@cornell.edu + +Carolyn Penstein Rosé + +Language Technologies Institute + +Carnegie Mellon University + +cp3a@andrew.cmu.edu + +# Abstract + +Recent work has demonstrated that entity representations can be extracted from pre-trained language models to develop knowledge graph completion models that are more robust to the naturally occurring sparsity found in knowledge graphs. In this work, we conduct a comprehensive exploration of how to best extract and incorporate those embeddings into knowledge graph completion models. We explore the suitability of the extracted embeddings for direct use in entity ranking and introduce both unsupervised and supervised processing methods that can lead to improved downstream performance. We then introduce supervised embedding extraction methods that can extract more informative representations. We then synthesize our findings and develop a knowledge graph completion model that significantly outperforms recent neural models. + +# 1 Introduction + +Knowledge graphs (KG) are structured representations of knowledge that contain a collection of factual relations between entities. KGs are valuable resources with applications in different areas such as representation learning (Liu et al., 2018), question answering (Sun et al., 2019; Shen et al., 2019; Thirukovalluru et al., 2021), and entity linking (Thai et al., 2021). 
+

However, the difficulty of curating knowledge at scale means that existing KGs are highly incomplete. This has led to the widespread study of knowledge graph completion (KGC), which aims to develop automated solutions that can suggest new facts to add to the KG (Yang et al., 2015; Trouillon et al., 2016; Dettmers et al., 2018). KGC is typically formulated as a ranking problem in which an incomplete fact is used as a query to retrieve entities that complete the fact.

Recent work has utilized pre-trained language models to develop approaches that are more robust to the naturally occurring sparsity within knowledge graphs. These approaches utilize textual entity descriptions to develop entity representations that are less reliant on graph connectivity.

Such work either fine-tunes the language model directly during training to encode the entities (e.g. Yao et al. (2019)) or extracts a set of entity embeddings prior to training, which can then be used to train a KGC model using standard training procedures (e.g. Lovelace et al. (2021)).

While fine-tuning language models often improves downstream performance (Rogers et al., 2020), it increases the computational overhead of computing entity representations. As a result, standard KGC training procedures that involve evaluating a large number of negative candidates for each positive instance are typically infeasible. Sampling only a small set of negative candidates enables training, but can harm performance.

Approaches that extract entity embeddings prior to training (Lovelace et al., 2021; Wang et al., 2021a) do not introduce any overhead for computing entity representations and are able to take advantage of standard training protocols. However, such approaches do not utilize any supervision to adapt the pre-trained language model to KGC.
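To make the contrast concrete, the sketch below (a toy NumPy illustration with made-up sizes, not the paper's implementation) shows why cached entity embeddings restore the standard 1-vs-all training setup: scoring every candidate is a single matrix product over the cached table, whereas keeping a fine-tuned encoder in the loop would require a forward pass per candidate.

```python
import numpy as np

rng = np.random.default_rng(0)
num_entities, dim = 1000, 64

# Entity embeddings extracted once from a language model and cached;
# no encoder forward passes are needed while training the KGC model.
entity_emb = rng.normal(size=(num_entities, dim))

def score_all_candidates(query_vec):
    """1-vs-all scoring: one matrix product ranks every entity."""
    return entity_emb @ query_vec  # shape: (num_entities,)

query_vec = rng.normal(size=dim)
scores = score_all_candidates(query_vec)
ranking = np.argsort(-scores)  # candidate indices, best first
```

With a sampled-negative regime, only a handful of rows of `entity_emb` would be scored per positive instance; the cached table makes the full product cheap enough to avoid that compromise.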
+

While both lines of previous work have demonstrated the effectiveness of their approaches at retrieving sparsely connected entities, they still lag behind KGC models that do not incorporate any textual information on standard benchmark datasets.

In this work, we develop a framework for adapting pre-trained language models to KGC that takes advantage of the strengths of both prior lines of work. We accomplish this by decoupling the entity representations used for computing the query representation and the entity representations used for retrieval (see Figure 1).

For candidate ranking, we extract and cache entity representations from a pre-trained language model prior to training. We then introduce lightweight unsupervised and supervised embedding processing techniques that improve the suitability of the space for candidate retrieval without sacrificing the scalability necessary to use standard KGC training procedures. The embedding processing techniques introduced in this work lead to significant performance improvements across datasets from diverse domains.

This decoupling also enables us to scalably fine-tune pre-trained language models to extract more informative entity representations for the query. However, naively fine-tuning the language model overfits to the knowledge graph and actually degrades performance. We find that parameter-efficient fine-tuning methods such as prompt-tuning mitigate this and improve downstream performance.

We synthesize our findings and utilize the most effective candidate representation processing and entity extraction techniques with a recently proposed neural ranking architecture. Although we do not make any modifications to the ranking architecture, our representation extraction and processing techniques lead to significant improvements across four diverse datasets. The findings and analysis from this work provide useful guidelines for developing and utilizing effective textual entity representations for KGC.
+

The rest of our paper is organized as follows. We discuss related work in Section 2, present a formal description of our task in Section 3, and describe the datasets used in this work in Section 4. We introduce unsupervised and supervised techniques to improve the suitability of entity embeddings for candidate ranking in Section 5. We then introduce supervised methods to extract more informative representations for the query entity in Section 6 and explore the effect of language model selection in Section 7. Finally, we synthesize our findings in Section 8 and compare against recent work on our datasets. Our contributions are as follows.

- We develop a novel framework for adapting pre-trained language models for KGC that significantly improves performance for both sparsely connected and widely studied benchmark datasets.
- We demonstrate that the embeddings extracted from pre-trained language models are suboptimal for entity ranking and introduce unsupervised and supervised processing techniques that transform the textual embedding space to be more suitable for candidate retrieval.
- We demonstrate that parameter-efficient fine-tuning methods can be applied scalably to extract more informative query entity representations.

# 2 Related Work

Yao et al. (2019) adapted a pre-trained language model to KGC by fine-tuning it for triplet classification, i.e. predicting whether a given fact is true. However, such an approach scales poorly to the widely studied ranking formulation and is not competitive with simpler approaches.

Follow-up work has developed more scalable frameworks utilizing siamese encoders to independently encode the query and candidate entities (Wang et al., 2021b; Li et al., 2022; Daza et al., 2021). While this is an improvement, it still cannot scale to the tens of thousands of negative candidates typically considered during training. Clouatre et al.
(2021) take a different approach and adapt the MLM objective to perform candidate retrieval by aggregating the logits for a number of mask tokens, eliminating the need to directly encode negative candidate entities. Although these approaches generally improve upon Yao et al. (2019), they still lag behind simpler models on standard benchmarks. + +Malaviya et al. (2020); Lovelace et al. (2021); Wang et al. (2021a) have taken a different approach and extracted entity embeddings from pre-trained language models prior to training. This eliminates the overhead of computing entity representations during training, enabling the use of standard training procedures. The focus of this line of work has been on developing neural ranking architectures that can effectively utilize the extracted textual embeddings. We focus on the complementary questions of how to best extract and use entity representations with existing neural architectures. + +# 3 Task Formulation + +Given a set of entities $\mathcal{E}$ and relations $\mathcal{R}$ , a KG can be defined as a collection of entity-relation-entity triplets $\mathcal{K} = \{(e_i, r_j, e_k)\} \subset \mathcal{E} \times \mathcal{R} \times \mathcal{E}$ where $e_i, e_k \in \mathcal{E}$ and $r_j \in \mathcal{R}$ . The aim of KGC is to develop a model that accepts a query consisting of a head entity and a relation, $(e_i, r_j, ?)$ , and ranks all candidate entities $e_k \in \mathcal{E}$ to resolve the query. An effective KGC model should rank correct candidates more highly than incorrect candidates. + +![](images/caf3e9fd194fe3e9c1e654708c2ea98e5088619060e6b099bbe73580ff4f08a2.jpg) +Figure 1: Overview of our proposed framework. + +Neural KGC models embed the head entity and relation and compute a query vector $f_{\theta}(\mathbf{e_i}, \mathbf{r_j}) = \mathbf{q}$ where $f_{\theta}(\cdot)$ is a neural network and $\mathbf{e_i}, \mathbf{r_j}, \mathbf{q} \in \mathbb{R}^d$ . 
Scores for each candidate, $e_k \in \mathcal{E}$, are computed as the inner product between the query vector and the candidate entity embedding, $y_k = \mathbf{q}\mathbf{e}_k^\top$ where $\mathbf{e_k} \in \mathbb{R}^d$. We follow Lovelace et al. (2021) and use textual descriptors to extract the entity embeddings from pre-trained language models while learning relation embeddings during training.

We evaluate the KGC models with standard ranking metrics: Mean Reciprocal Rank (MRR), Hits at 1 (H@1), Hits at 3 (H@3), and Hits at 10 (H@10). We follow standard procedure, considering both forward and reverse relations and using the filtered evaluation setting (Dettmers et al., 2018). We validate the significance of improvements in the MRR with paired bootstrap significance testing (Berg-Kirkpatrick et al., 2012) and correct for multiple hypothesis testing with the Benjamini/Hochberg method (Benjamini and Hochberg, 1995).

# 4 Datasets

We work with KGC datasets that cover diverse domains such as commonsense, biomedical, and encyclopedic knowledge. For the commonsense KG dataset, we work with the CN-82K dataset introduced by Wang et al. (2021a), which is derived from ConceptNet. For the biomedical KGC dataset, we work with the SNOMED-CT Core dataset introduced by Lovelace et al. (2021). For the encyclopedic dataset, we utilize the widely used benchmark KGC dataset, FB15k-237 (Toutanova and Chen, 2015). We additionally utilize the widely studied WN18RR (Dettmers et al., 2018) dataset, which is derived from WordNet. Dataset statistics are reported in the appendix in Table 7.

# 5 Candidate Retrieval

Mu and Viswanath (2018); Ethayarajh (2019); Li et al. (2020) have observed that textual embedding spaces tend to be highly anisotropic, i.e. most vectors occupy a narrow cone within the space, which limits their expressiveness. Furthermore, approaches that improve the isotropy, i.e.
the uniformity with respect to direction, of the embedding space lead to significant improvements on semantic similarity benchmarks (Mu and Viswanath, 2018; Li et al., 2020; Gao et al., 2021). Given that entity ranking relies upon a similar scoring mechanism, the existing embedding space may be similarly suboptimal for candidate retrieval.

# 5.1 Embedding Quality Metrics

We measure two primary aspects of the embedding space to analyze the effect of different processing techniques: the anisotropy of the space and the alignment of the space with the knowledge contained within the graph. We note that these aspects correspond to the notions of uniformity and alignment from work in contrastive learning (Wang and Isola, 2020; Gao et al., 2021).

# 5.1.1 Effective Dimension

We utilize a measure of anisotropy introduced by Cai et al. (2021) called the $\epsilon$-effective-dimension. We first apply PCA to the matrix of entity embeddings. The ratio of the variance explained by $k$ principal components can then be calculated as $r_k = \sum_{i=0}^{k-1} \sigma_i / \sum_{j=0}^{m-1} \sigma_j$, where $\sigma_i$ is the $i$-th largest eigenvalue of the covariance matrix of the embeddings. The $\epsilon$-effective-dimension is then $d(\epsilon) = \operatorname{argmin}_k r_k \geq \epsilon$. We set $\epsilon = 0.8$, which means that we measure the minimum number of PCA components necessary to explain $80\%$ of the variance in the embedding space.

# 5.1.2 Knowledge Alignment

For some set of facts $\{(e_i,r_j,e_k)\}_{k = 1}^n$ we would expect $\{e_k\}_{k = 1}^n$ to be similar in some way. For example, all entities that satisfy the query (abdomen, finding_site_of, ?) are abdominal conditions. The inner product scoring means that this similarity should be encoded within the entity embedding space to enable retrieving the set of correct entities with a single query vector.
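The $\epsilon$-effective-dimension of Section 5.1.1 can be sketched in a few lines; the toy data below (an assumption for illustration, with one dominant axis) mimics a highly anisotropic space.

```python
import numpy as np

def effective_dimension(embeddings, eps=0.8):
    """Minimum number of principal components explaining an `eps`
    fraction of the variance (the d(eps) of Cai et al., 2021)."""
    centered = embeddings - embeddings.mean(axis=0)
    # Eigenvalues of the covariance matrix, largest first,
    # stand in for the PCA spectrum.
    eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))[::-1]
    ratios = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratios, eps) + 1)

# Toy anisotropic space: the first axis dwarfs the other fifteen,
# so a single component already explains over 80% of the variance.
rng = np.random.default_rng(0)
emb = rng.normal(size=(500, 16))
emb[:, 0] *= 100.0
```

An isotropic 16-dimensional Gaussian would instead need roughly thirteen components to reach the same 0.8 threshold, which is the contrast the metric is designed to expose.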
+

To evaluate the alignment of the embedding space and the KG, we define the similarity between two entities as

$$
\operatorname{Sim}(e_i, e_j) = \sum_{e_k \in \mathcal{E}, r_l \in \mathcal{R}} \mathbb{1}(e_k, r_l, e_i) \times \mathbb{1}(e_k, r_l, e_j)
$$

where $\mathcal{E}$ is the set of entities, $\mathcal{R}$ is the set of relations, and $\mathbb{1}(e_k,r_l,e_i)$ evaluates to one if the fact is contained within the KG and zero otherwise. We report the knowledge alignment as Spearman's rank correlation, $\rho$, between our KG-induced measure of similarity and the inner product between centered entity embeddings.

# 5.1.3 Lexical Alignment

As a complementary measure to knowledge alignment, we also measure the lexical alignment of the embedding space by calculating Spearman's rank correlation, $\rho$, between the Jaccard similarity of the entity descriptions and the inner product between centered entity embeddings.

# 5.2 Embedding Processing Techniques

# 5.2.1 Unsupervised Techniques

**Normalization** As a simple baseline, we normalize each entity embedding, $\mathbf{e_i} \in \mathbb{R}^d$, by centering the embedding space and scaling each vector to unit norm as $\tilde{\mathbf{e}}_{\mathbf{i}} = \frac{\mathbf{e}_{\mathbf{i}} - c}{\|\mathbf{e}_{\mathbf{i}} - c\|_2}$ where $c \in \mathbb{R}^d$ is the mean of the entity embeddings.

**Normalizing Flow** We learn a normalizing flow to transform the anisotropic embedding space to an isotropic space, similar to Li et al. (2020). We briefly introduce normalizing flows, but we refer the reader to Papamakarios et al. (2021) for a comprehensive overview.

Normalizing flows can be used to transform a distribution into a known probability distribution.
Given $\mathbf{x} \in \mathbb{R}^d$ with an unknown true distribution $\mathbf{x} \sim p_x^*(\mathbf{x})$, we can define a joint distribution over $\mathbf{x}$ following the generative process $\mathbf{x} = T(\mathbf{u}), \mathbf{u} \sim p_u(\mathbf{u})$ where $p_u(\mathbf{u})$ is the base probability distribution of the flow model.

![](images/ff5b2e688e2f49ff1400e8eac7c78df1363469b3e19dd1f5b6b5e6565f2a8dde.jpg)
Figure 2: Intrinsic evaluation of embedding processing techniques. We note the MRR for each approach in parentheses.

Normalizing flows constrain the transformation, $T$, to be a diffeomorphism, which allows us to write the density of $\mathbf{x}$ in terms of $p_u(\mathbf{u})$ and the Jacobian determinant of $T^{-1}$ as $p_x(\mathbf{x}) = p_u(T^{-1}(\mathbf{x}))|\operatorname*{det}(J_{T^{-1}}(\mathbf{x}))|$. We can then fit the flow by minimizing the negative log-likelihood of observed samples $\{\mathbf{x}_i\}_{i=1}^N$ as

$$
\begin{array}{l} - \log (p _ {x} (\mathbf {x} _ {\mathbf {i}})) = \\ - \log (p _ {u} (T ^ {- 1} (\mathbf {x} _ {\mathbf {i}}))) - \log | \det (J _ {T ^ {- 1}} (\mathbf {x} _ {\mathbf {i}})) | \\ \end{array}
$$

We define $T^{-1}(\mathbf{x}) = \mathbf{W}\mathbf{x} + \mathbf{b}$ where $\mathbf{W} \in \mathbb{R}^{d \times d}$ and $\mathbf{x}, \mathbf{b} \in \mathbb{R}^d$. To ensure the invertibility of $\mathbf{W}$ and to simplify the computation of the Jacobian determinant, we parameterize $\mathbf{W}$ using its LU decomposition (Kingma and Dhariwal, 2018). We select a multivariate Gaussian centered on the origin with identity covariance for the base distribution. Thus, the normalizing flow learns to map the embedding space to an isotropic Gaussian.

# 5.2.2 Supervised Techniques

We explore two inexpensive supervised techniques that learn to transform the embedding space. For both techniques, we preprocess the set of entity embeddings by centering and scaling them to have unit norm prior to the transformation.
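This shared preprocessing, which is also the Normalization baseline of Section 5.2.1, amounts to centering the space and scaling each vector to unit norm; a minimal NumPy sketch (toy embeddings for illustration):

```python
import numpy as np

def normalize_embeddings(E):
    """Center the embedding space, then scale each vector to unit
    norm: e~_i = (e_i - c) / ||e_i - c||_2, with c the mean embedding."""
    centered = E - E.mean(axis=0, keepdims=True)
    return centered / np.linalg.norm(centered, axis=1, keepdims=True)

# Toy 2-d embedding matrix with three entities.
E = np.array([[3.0, 0.0], [1.0, 2.0], [2.0, 4.0]])
E_tilde = normalize_embeddings(E)  # every row now has unit length
```

After this step, inner products between candidate vectors reduce to cosine similarities around the centroid, which is what the supervised transforms then refine.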
+ +MLP We consider an MLP with one hidden layer followed by normalization. Thus, a processed entity embedding, $\mathbf{e_i}$ , is transformed as $\tilde{\mathbf{e}}_{\mathrm{i}} = \frac{MLP(\mathbf{e}_{\mathrm{i}})}{\|MLP(\mathbf{e}_{\mathrm{i}})\|_2}$ . + +Residual MLP We consider an MLP that uses a residual connection with the original embedding. A processed entity embedding, $\mathbf{e_i}$ , would then be transformed as $\tilde{\mathbf{e}}_{\mathrm{i}} = \frac{(\mathbf{e}_{\mathrm{i}} + MLP(\mathbf{e}_{\mathrm{i}}))}{\|(\mathbf{e}_{\mathrm{i}} + MLP(\mathbf{e}_{\mathrm{i}}))\|_2}$ . + +
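The two supervised transforms above can be sketched as follows; the weights are random stand-ins for parameters that would be learned end-to-end with the ranking loss:

```python
import numpy as np

rng = np.random.default_rng(0)
d, hidden = 8, 16

# Hypothetical one-hidden-layer MLP weights (learned in practice).
W1, b1 = rng.normal(size=(d, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, d)), np.zeros(d)

def mlp(e):
    return np.maximum(e @ W1 + b1, 0.0) @ W2 + b2  # ReLU hidden layer

def mlp_transform(e):
    out = mlp(e)
    return out / np.linalg.norm(out)  # MLP(e) / ||MLP(e)||_2

def residual_mlp_transform(e):
    out = e + mlp(e)                  # residual connection to the input
    return out / np.linalg.norm(out)  # (e + MLP(e)) / ||e + MLP(e)||_2

# Inputs are centered and scaled to unit norm before the transform;
# for a single vector we just normalize here.
e = rng.normal(size=d)
e = e / np.linalg.norm(e)
e_tilde = residual_mlp_transform(e)
```

The residual connection lets the network start from the (already informative) original embedding and learn only a correction, which matches the Residual MLP's stronger empirical results.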
| Model | SNOMED CT Core MRR | H@1 | H@3 | H@10 | CN-82K MRR | H@1 | H@3 | H@10 | FB15k-237 MRR | H@1 | H@3 | H@10 | WN18RR MRR | H@1 | H@3 | H@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Default Embeddings | .488 | .383 | .543 | .689 | .190 | .127 | .208 | .314 | .339 | .259 | .370 | .500 | .575 | .503 | .606 | .716 |
| Normalization | .487 | .381 | .544 | .692 | .192 | .128 | .211 | .317 | .348*** | .264 | .381 | .514 | .576 | .501 | .608 | .726 |
| Normalizing Flow | .508*** | .401 | .566 | .713 | .194** | .129 | .213 | .320 | .352*** | .265 | .385 | .527 | .580* | .509 | .607 | .721 |
| MLP | .539***† | .431 | .598 | .749 | .200***† | .132 | .222 | .339 | .374***† | .282 | .407 | .561 | .583** | .510 | .613 | .730 |
| Residual MLP | .549***† | .445 | .507 | .752 | .209***† | .138 | .230 | .350 | .375***† | .283 | .408 | .564 | .591***† | .518 | .616 | .735 |
+ +Table 1: Comparison of candidate transformation techniques. The highest metrics for unsupervised and supervised techniques are bolded. We indicate a significant improvement over the default embeddings with *, **, ***(p < 0.05, 0.005, 5e-5) and over the normalizing flow with † (p < 5e-5). + +
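As a concrete illustration of the knowledge-alignment measure, the KG-side similarity $\operatorname{Sim}$ counts shared (source entity, relation) contexts. A minimal Python sketch with an invented four-entity toy KG (all names and embeddings are stand-ins; the final Spearman correlation step is omitted):

```python
import numpy as np

# Hypothetical toy KG: facts are (e_k, r_l, target) triples; all names invented.
facts = {("a", "r1", "c"), ("a", "r1", "d"), ("b", "r2", "c"), ("b", "r2", "d")}

def kg_similarity(e_i, e_j):
    # Sim(e_i, e_j): number of (entity, relation) contexts pointing at both,
    # i.e. sum over (e_k, r_l) of 1[(e_k, r_l, e_i)] * 1[(e_k, r_l, e_j)].
    contexts_i = {(e_k, r_l) for (e_k, r_l, t) in facts if t == e_i}
    contexts_j = {(e_k, r_l) for (e_k, r_l, t) in facts if t == e_j}
    return len(contexts_i & contexts_j)

# Embedding-side similarity: inner products between *centered* entity embeddings.
emb = np.random.default_rng(0).normal(size=(4, 8))  # rows: entities a, b, c, d
centered = emb - emb.mean(axis=0)
inner_products = centered @ centered.T

# Knowledge alignment is then the Spearman rank correlation between
# kg_similarity and inner_products over all entity pairs.
print(kg_similarity("c", "d"))  # -> 2: both are reached via (a, r1) and (b, r2)
```

Centering the embeddings before taking inner products mirrors the normalization baseline and removes the common mean direction that otherwise dominates anisotropic spaces.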
| Model | SNOMED CT Core MRR | H@1 | H@3 | H@10 | CN-82K MRR | H@1 | H@3 | H@10 | FB15k-237 MRR | H@1 | H@3 | H@10 | WN18RR MRR | H@1 | H@3 | H@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CLS Token | .472 | .371 | .521 | .671 | .157 | .104 | .171 | .259 | .351 | .266 | .383 | .525 | .549 | .488 | .567 | .675 |
| + Pretraining | .489* | .385 | .540 | .695 | .189* | .126 | .207 | .314 | .356* | .270 | .388 | .530 | .587* | .515 | .618 | .732 |
| Mean Pooling | .503 | .397 | .559 | .705 | .184 | .124 | .202 | .303 | .352 | .266 | .385 | .525 | .577 | .508 | .603 | .719 |
| + Pretraining | .509* | .403 | .566 | .713 | .195* | .130 | .216 | .323 | .352 | .265 | .385 | .527 | .580 | .509 | .607 | .721 |
Table 2: Ablation of embedding extraction techniques. We indicate significant improvements from the pretraining procedure with * ($p < 5\mathrm{e}{-5}$).

# 5.3 Experiments

We evaluate the different embedding processing methods using the textual entity embeddings released by Lovelace et al. (2021)$^{2}$. We also utilize BERT-ResNet with the default hyperparameters from Lovelace et al. (2021) as our neural ranking architecture, $f_{\theta}(\cdot ,\cdot)$. We apply the transformation, $g_{\theta}(\mathbf{e_k}) = \tilde{\mathbf{e}}_{\mathbf{k}}$ where $\tilde{\mathbf{e}}_{\mathbf{k}}\in \mathbb{R}^{d}$, only to the embedding matrix used for candidate ranking. Therefore, we compute the score as $y_{k} = f_{\theta}(\mathbf{e_i},\mathbf{r_j})g_{\theta}(\mathbf{e_k})^{\top}$.

# 5.4 Impact of Embedding Space Transformations

We report the effect of the different transformations on downstream performance in Table 1 and display the intrinsic embedding metrics for WN18RR in Figure 2. Figures for the other datasets are presented in the appendix and show similar findings.

The normalization baseline is generally ineffective, which is consistent with its limited effect on the embedding metrics. The normalizing flow greatly increases the effective dimensionality but decreases the knowledge alignment of the space. This suggests that there may be a trade-off between the isotropy and the alignment of the space, which is consistent with observations from work in contrastive learning (Gao et al., 2021). Despite this trade-off, optimizing solely for isotropy significantly improves performance across all datasets, confirming that the anisotropy of the original space hurts performance.

For the supervised techniques, the MLP and

![](images/3b3b736d52d01c6afc27da88fa291aac56e9325a03833a3f6fffe2e3c4d19891.jpg)
Figure 3: Effect of the Residual MLP on knowledge and lexical alignment.
+ +Residual MLP lead to significantly improved performance, with the Residual MLP consistently outperforming the MLP. Both transformations consistently improve the knowledge alignment of the embedding spaces. Compared to the MLP, the Residual MLP produces a more isotropic space. Given its strong performance, the Residual MLP seems to best balance the trade-off between the knowledge alignment and isotropy of the embeddings. + +We contrast the effect of the Residual MLP on knowledge and lexical alignment in Figure 3. The Residual MLP strengthens the KG alignment while reducing the lexical alignment across all datasets, demonstrating that it learns to emphasize relevant information while discarding spurious information. + +# 5.5 Embedding Extraction Ablation + +For this ablation, we used the most effective unsupervised processing technique, the normalizing flow, for candidate ranking. We ablate the efficacy of the following embedding extraction choices. + +[CLS] Token: We extract the embedding of the + +
| Model | SNOMED CT Core MRR | H@1 | H@3 | H@10 | CN-82K MRR | H@1 | H@3 | H@10 | FB15k-237 MRR | H@1 | H@3 | H@10 | WN18RR MRR | H@1 | H@3 | H@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Unsupervised Extraction | .509 | .403 | .566 | .713 | .195 | .130 | .216 | .323 | .356 | .270 | .388 | .530 | .587 | .515 | .618 | .732 |
| Fine-tuning | .496 | .386 | .555 | .709 | .186 | .124 | .203 | .307 | .347 | .260 | .379 | .522 | .579 | .509 | .606 | .721 |
| Linear Probe | .516††† | .408 | .575 | .722 | .195 | .130 | .215 | .324 | .358† | .272 | .392 | .530 | .598†† | .524 | .630 | .746 |
| Prompt-tuning | .515††† | .410 | .573 | .719 | .201††† | .136 | .222 | .333 | .357 | .271 | .392 | .528 | .597†† | .523 | .630 | .744 |
Table 3: Comparison of query entity extraction techniques. We indicate significant improvements over the best unsupervised approach with †, ††, ††† ($p < .05$, $5\mathrm{e}{-4}$, $5\mathrm{e}{-5}$).

[CLS] token from the final layer following prior work (Malaviya et al., 2020; Wang et al., 2021a).

Mean Pooling: We mean pool across all tokens and layers following Lovelace et al. (2021).

MLM Pretraining: Recent work (Malaviya et al., 2020; Wang et al., 2021a; Lovelace et al., 2021) has pretrained the language model using the MLM objective on the set of entity names. We ablate the impact of this choice.

We report the KGC metrics in Table 2. The MLM pretraining often results in significant improvements in downstream performance. The optimal unsupervised extraction technique varies based on the dataset: mean pooling is most effective for the SNOMED CT Core and CN82K datasets, while the [CLS] embedding is most effective for the other two. However, we observe that mean pooling after MLM pretraining is reasonably effective across all datasets.

# 6 Query Entity Extraction

We explore supervised techniques to extract more informative representations from pre-trained language models for the query entity.

Fine-tuning: We fine-tune the language model during training and extract the entity representation by mean pooling across the intermediate states in each layer and aggregating across layers with a learned linear combination.

Linear Probe: We freeze the language model and apply a learned linear projection (Toshniwal et al., 2020) to every hidden state of the model. We then max-pool across the tokens in each layer to produce a single feature vector for every layer. We aggregate these features using a learned linear combination across layers.
Prompt-tuning: We learn continuous prompts that we prepend to the language model inputs at every layer to prompt the frozen model (Li and Liang, 2021). We extract entity representations by mean pooling across intermediate states in each layer and aggregate across layers with a learned linear combination.

![](images/8fd3fae1ae6ef500366fe8387cca4cf76216d3716c49f6fc475f81dd0abfe747.jpg)
Figure 4: Effect of supervised extraction techniques compared to the unsupervised baseline. Error bars indicate $95\%$ confidence intervals.

# 6.1 Experiments

To isolate the effect of the query embedding extraction technique, we use the normalizing flow for candidate ranking with the most effective embeddings from our prior ablation for each dataset.

The supervised extraction techniques introduce an additional function, $h_{\theta}(e_i) = \hat{\mathbf{e}}_i$ where $\hat{\mathbf{e}}_i \in \mathbb{R}^d$, to extract entity representations for computing the query $f_{\theta}(\hat{\mathbf{e}}_i, \mathbf{r}_j) = \hat{\mathbf{q}}$. Therefore, the score is computed as $y_k = f_{\theta}(h_{\theta}(e_i), \mathbf{r}_j) g_{\theta}(\mathbf{e}_k)^{\top}$.

# 6.1.1 Impact of Embedding Extraction Techniques

We report the KGC metrics in Table 3. Fine-tuning the language model during training actually degrades performance across all datasets, although it does minimize the training loss more effectively than the other approaches. We break down the effect of the different techniques in Figure 4 by the connectivity of the query entity for the WN18RR dataset. We observe that the performance degradation is more pronounced for queries with lower connectivity, although this degradation does not extend to unseen query entities. This suggests that fine-tuning the language model leads to overfitting for entities with limited information. The figures for the other datasets show similar trends and are presented in the appendix.
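The decoupled scoring $y_k = f_{\theta}(h_{\theta}(e_i), \mathbf{r}_j)\, g_{\theta}(\mathbf{e}_k)^{\top}$ lets one query be scored against every candidate with a single matrix product. A minimal sketch with stand-in components (the real $f_{\theta}$ is the BERT-ResNet ranker, not a sum; all shapes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, d = 100, 16

# h_theta(e_i): query-entity representation extracted from the language model;
# here, mean-pool tokens per layer, then mix layers with learned weights
# (uniform stand-ins for a learned linear combination).
hidden_states = rng.normal(size=(13, 5, d))     # (layers, tokens, dim)
layer_means = hidden_states.mean(axis=1)        # (layers, dim)
layer_weights = np.ones(13) / 13                # learned in practice
query_entity_emb = layer_weights @ layer_means  # (dim,)

# f_theta: stand-in for the neural ranking architecture.
rel_emb = rng.normal(size=d)
q = query_entity_emb + rel_emb

# g_theta(e_k): transformed candidate embeddings, precomputed once and cached.
candidate_embs = rng.normal(size=(n_entities, d))

scores = q @ candidate_embs.T   # y_k for all candidates in one product
ranking = np.argsort(-scores)   # best-scoring candidate first
```

Because the candidate side is a frozen, precomputed matrix, only the query side ever touches the language model, which is what keeps training and inference scalable.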
| Model | SNOMED CT Core MRR | H@1 | H@3 | H@10 | CN-82K MRR | H@1 | H@3 | H@10 | FB15k-237 MRR | H@1 | H@3 | H@10 | WN18RR MRR | H@1 | H@3 | H@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *Unsupervised Embedding Extraction & Residual MLP* |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| BERT-base | .531 | .425 | .588 | .736 | .210 | .139 | .232 | .352 | .373 | .282 | .406 | .559 | .590 | .518 | .616 | .735 |
| BERT-large | .545** | .441 | .601 | .749 | .212 | .139 | .234 | .356 | .375 | .282 | .410 | .563 | .597* | .524 | .624 | .743 |
| PubMedBERT | .549‡ | .444 | .606 | .754 | - | - | - | - | - | - | - | - | - | - | - | - |
| *Prompt-tuning & Residual MLP* |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| BERT-base | .530 | .423 | .587 | .736 | .214†† | .142 | .237 | .361 | .376† | .284 | .410 | .562 | .599† | .525 | .632 | .749 |
| BERT-large | .541** | .434 | .599 | .749 | .216†† | .144 | .238 | .361 | .373 | .280 | .409 | .561 | .608**†† | .538 | .636 | .751 |
| PubMedBERT | .550‡ | .443 | .611 | .755 | - | - | - | - | - | - | - | - | - | - | - | - |
Table 4: Effect of language model selection. We indicate significant improvements from the larger language model with *, ** (p < .05, 5e-5); from prompting with †, †† (p < 0.05, .005); and from specialization with ‡ (p < 5e-5).

The parameter-efficient supervised techniques do, however, lead to significantly improved performance across all datasets, although there is not a clear winner between them. These techniques mitigate the overfitting problem while enabling beneficial adaptation to the downstream task. Figure 4 shows that the benefits of supervision are greatest for sparsely connected query entities. For densely connected query entities, the impact is generally negligible, potentially because the graph already contains sufficient information about the entity.

We note that sparsely connected entities were filtered out of the FB15k-237 KG during the curation of the dataset, producing an artificially dense KG (Lovelace et al., 2021). This artificial density limits the benefit of techniques that improve performance for sparsely connected entities. Therefore, our analysis also explains the limited topline improvements for the FB15k-237 dataset.

# 7 Effect of Language Model Selection

Further performance improvements can often be gained by scaling up the size of the language model (Devlin et al., 2019) or by using specialized, domain-specific language models (Gu et al., 2020). In this section, we examine the effect of these two aspects on downstream KGC performance.

We conduct experiments with both unsupervised and supervised query entity extraction techniques while using our best candidate ranking approach, the Residual MLP. We conduct experiments with BERT-base-uncased and BERT-large-uncased for all four KGs. To evaluate the effect of specialization, we use PubMedBERT, which is the same size as BERT-base, for SNOMED CT Core.

We report the results of these experiments in Table 4.
When using unsupervised extraction techniques, the larger language model consistently improves performance, but the differences can be minor. For the supervised extraction techniques, the larger language model actually degrades performance relative to the unsupervised extraction techniques in some cases. The effect of using supervision for extracting the query entity is dataset-dependent and is helpful for CN82K and WN18RR.

The supervised extraction and larger language models do lead to lower training loss, but that improvement does not consistently translate to stronger test performance. Thus, the mixed results likely arise from overfitting, which could potentially be mitigated with careful regularization. Domain-specific pretraining is particularly effective, with PubMedBERT consistently outperforming the other models.

# 8 Comparison Against Recent Work

We synthesize our findings to develop a KGC model and compare it against recent work. We again simply repurpose the BERT-ResNet ranking architecture with the default hyperparameters from Lovelace et al. (2021) to demonstrate the impact of the decisions explored in this work.

We report results across the two sparser datasets in Table 5. Our embedding extraction and processing techniques outperform recent work, with the supervised techniques being particularly effective. In Table 5 we also compare against a selection of baselines on the FB15K-237 and WN18RR datasets, and we further denote whether the models utilize additional graph information or textual information.

Our KGC model is very effective and outperforms the models that do not incorporate any additional information. Although this seems natural, it was actually not the case for previous work. Therefore, our method integrates textual information in a way that leads to competitive performance even for these widely studied benchmark datasets.
| Model | SNOMED CT Core MRR | H@1 | H@3 | H@10 | CN-82K MRR | H@1 | H@3 | H@10 | Text |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DistMult (Yang et al., 2015) | .293 | .226 | .318 | .426 | .0280 | - | .0290 | .0560 | ✗ |
| ComplEx (Trouillon et al., 2016) | .302 | .224 | .332 | .456 | .0260 | - | .0270 | .0500 | ✗ |
| ConvE (Dettmers et al., 2018) | .271 | .191 | .303 | .429 | .0801 | - | .0867 | .1313 | ✗ |
| BERT-ConvTransE (Malaviya et al., 2020) | - | - | - | - | .1626 | - | .1795 | .2751 | ✓ |
| Inductive (Wang et al., 2021a) | - | - | - | - | .2035 | - | .2265 | .3386 | ✓ |
| BERT-DeepConv (Lovelace et al., 2021) | .479 | .374 | .532 | .685 | - | - | - | - | ✓ |
| BERT-ResNet (Lovelace et al., 2021) | .492 | .389 | .544 | .694 | .190 | .127 | .208 | .318 | ✓ |
| BERT-ResNet + Normalizing Flow | .509 | .403 | .566 | .713 | .195 | .130 | .216 | .323 | ✓ |
| BERT-ResNet + Prompt-tuning + Normalizing Flow | .515 | .410 | .573 | .719 | .201 | .136 | .222 | .333 | ✓ |
| BERT-ResNet + Residual MLP | .549 | .444 | .606 | .754 | .212 | .139 | .234 | .356 | ✓ |
| BERT-ResNet + Prompt-tuning + Residual MLP | .550 | .443 | .611 | .755 | .216 | .144 | .238 | .361 | ✓ |
+ +
| Model | FB15K-237 MRR | H@1 | H@3 | H@10 | WN18RR MRR | H@1 | H@3 | H@10 | Graph Structure | Text |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RESCAL† (Nickel et al., 2011) | .357 | - | - | .541 | .467 | - | - | .517 | ✗ | ✗ |
| TransE† (Bordes et al., 2013) | .313 | - | - | .497 | .228 | - | - | .520 | ✗ | ✗ |
| DistMult† (Yang et al., 2015) | .343 | - | - | .531 | .452 | - | - | .531 | ✗ | ✗ |
| ComplEx† (Trouillon et al., 2016) | .348 | - | - | .536 | .475 | - | - | .547 | ✗ | ✗ |
| ConvE† (Dettmers et al., 2018) | .339 | - | - | .521 | .442 | - | - | .504 | ✗ | ✗ |
| CompGCN (Vashishth et al., 2020) | .355 | .264 | .390 | .535 | .479 | .443 | .494 | .546 | ✓ | ✗ |
| HittER (Chen et al., 2021) | .373 | .279 | .409 | .558 | .503 | .462 | .516 | .584 | ✓ | ✗ |
| KG-BERT‡ (Yao et al., 2019) | .236 | .145 | .258 | .420 | .242 | .110 | .280 | .524 | ✗ | ✓ |
| BERT-TransE (Daza et al., 2021) | .235 | .150 | .253 | .411 | .325 | .144 | .431 | .679 | ✗ | ✓ |
| MLMLM (Clouatre et al., 2021) | .259 | .187 | .282 | .403 | .502 | .439 | .542 | .611 | ✗ | ✓ |
| StAR (Wang et al., 2021b) | .296 | .205 | .322 | .482 | .401 | .243 | .491 | .709 | ✗ | ✓ |
| LP-BERT (Li et al., 2022) | .310 | .223 | .336 | .490 | .482 | .343 | .563 | .752 | ✗ | ✓ |
| BERT-ResNet (Lovelace et al., 2021) | .346 | .262 | .379 | .514 | .575 | .503 | .606 | .716 | ✗ | ✓ |
| BERT-ResNet + Normalizing Flow | .356 | .270 | .388 | .530 | .587 | .515 | .618 | .732 | ✗ | ✓ |
| BERT-ResNet + Prompt-tuning + Normalizing Flow | .357 | .271 | .392 | .528 | .599 | .527 | .630 | .743 | ✗ | ✓ |
| BERT-ResNet + Residual MLP | .375 | .282 | .410 | .563 | .597 | .524 | .624 | .743 | ✗ | ✓ |
| BERT-ResNet + Prompt-tuning + Residual MLP | .376 | .284 | .410 | .562 | .608 | .538 | .636 | .751 | ✗ | ✓ |
Table 5: Comparison against baselines and recent work. We indicate that the results are from Ruffinelli et al. (2020) with a † and from the work by Daza et al. (2021) with a ‡. The baselines for SNOMED CT Core and CN82K are taken from Lovelace et al. (2021) and Wang et al. (2021a) respectively, except for the BERT-ResNet result for CN82K, which is from our implementation. The WN18RR result for BERT-ResNet is also from our implementation. Other results are taken from the original work. Dashes indicate that the metric was not reported by the prior work.

# 8.1 Complementarity of Textual Approach

To evaluate the complementarity of textual and non-textual approaches, we train a transformer model similarly to Chen et al. (2021). We refer the reader to the appendix for full details regarding this model. We then ensemble this model with our most effective model from Table 5, computing candidate scores as a convex combination of the two sets of scores. We tune the ensemble weight with the validation set. We also explore using an independent weight for each relation. As a baseline comparison, we ensemble our best configuration across two random seeds.

We report the results of this experiment in Table 6. We observe that ensembling is consistently effective, particularly the relation-specific ensembling. On the WN18RR dataset, where the textual approach is already highly effective, ensembling the different approaches does not outperform the self-ensemble. However, for the FB15k-237 dataset, where the performance of the different approaches is closer, ensembling the textual and non-
| Model | FB15K-237 MRR | H@1 | H@3 | H@10 | WN18RR MRR | H@1 | H@3 | H@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Transformer | .367 | .272 | .404 | .554 | .486 | .446 | .503 | .564 |
| Our Framework | .376 | .284 | .410 | .562 | .608 | .538 | .636 | .751 |
| Alt. Seed | .377 | .285 | .412 | .564 | .605 | .533 | .634 | .749 |
| *Simple Ensemble* |  |  |  |  |  |  |  |  |
| Self-Ensemble | .384*** | .292 | .420 | .570 | .613*** | .540 | .641 | .760 |
| Transformer Ensemble | .388****††‡ | .295 | .425 | .576 | .609* | .539 | .638 | .755 |
| *Relation-Specific Ensemble* |  |  |  |  |  |  |  |  |
| Self-Ensemble | .391*** | .303 | .424 | .571 | .616****‡‡ | .544 | .642 | .758 |
| Transformer Ensemble | .400****††‡‡ | .310 | .435 | .582 | .612*‡ | .543 | .640 | .756 |
Table 6: Ensembling results. We indicate significant improvements over our framework with *, **, *** (p < .05, 5e-4, 5e-5); from the transformer ensemble with † (p < 5e-5); and from relation-specific ensembling with ‡, ‡‡, ‡‡‡ (p < .005, 5e-4, 5e-5).

textual models does meaningfully improve performance over the self-ensemble. This demonstrates that textual approaches can complement existing methods.

# 9 Conclusion

We present a framework for adapting pre-trained language models for KGC. The key insight driving the development of our framework is that decoupling the entity representations used to compute the query representation from those used for candidate retrieval enables us to better integrate the information from pre-trained language models while maintaining the scalability necessary to train performant KGC models.

We introduced unsupervised and supervised techniques to improve the suitability of entity embeddings for candidate ranking (Section 5), introduced methods to extract entity embeddings from language models (Section 6), and explored the effect of language model selection (Section 7).

By synthesizing the insights from our exploration, we developed a KGC model that significantly outperforms recent work while simply repurposing an existing ranking architecture. While innovations in neural ranking architectures have been valuable, our work demonstrates the importance of developing more informative entity representations. The findings and analysis from this work provide a useful framework for adapting pre-trained language models for knowledge graph completion.

# 10 Limitations

# 10.1 Training Overhead

We report and discuss the number of trainable parameters and training times across our different configurations in detail in the appendix$^{3}$. We present the main takeaways in this section.
The supervised techniques like the Residual MLP and prompt-tuning introduce additional parameters and can increase the training time compared to the BERT-ResNet baseline. However, both the Residual MLP and prompt-tuning are very parameter-efficient. When utilizing BERT-base, the Residual MLP increases the number of trainable parameters by $3.6\%$ and prompt-tuning increases it by $1.2\%$. The increases are similar when utilizing BERT-large ($3.6\%$ and $1.1\%$ respectively). Directly fine-tuning BERT-base, for comparison, increases the number of trainable parameters by $331.2\%$.

The Residual MLP, while lightweight, does increase the training time per iteration. For the candidate transformation experiment on the WN18RR dataset (Section 5), the baseline completes one epoch in 3m56s while the Residual MLP increases this to 5m44s. However, the Residual MLP also accelerates convergence, offsetting the per-iteration slowdown. Although it takes a similar amount of time to train the baseline for 6 epochs as it does to train the Residual MLP model for 4 epochs, the Residual MLP actually outperforms the baseline at that time despite training for fewer iterations.

Therefore, the baseline is only more effective in the earliest stage of training before being surpassed by the Residual MLP model. For the WN18RR dataset, this breakeven point occurs within only $29\mathrm{m}$ of training. This trend holds across all datasets, with the worst breakeven point being only $1\mathrm{h}43\mathrm{m}$. Therefore, the accelerated convergence offsets the increased per-iteration cost for all but the shortest of training times.

Techniques such as prompt-tuning require the application of a language model, which increases the time per iteration. For the query extraction experiment on the WN18RR dataset (Section 6), the baseline completes one epoch in $3\mathrm{m}54\mathrm{s}$ while prompt-tuning increases this to $8\mathrm{m}47\mathrm{s}$.
When controlling for wall clock time, we observe a similar trend where the baseline is more effective early in training before being surpassed by prompt-tuning. However, the breakeven point occurs much later (e.g. at $14\mathrm{h}1\mathrm{m}$ for WN18RR). Therefore, in settings with limited training budgets, the performance improvement from prompt-tuning may not justify the additional training cost.

We note that none of the techniques explored in our work introduce any overhead at inference time. After training, the improved entity representations from the Residual MLP or prompt-tuning can be computed and cached for inference, reducing the cost of computing entity embeddings to a simple lookup, as in the original BERT-ResNet model.

# 10.2 Availability of Textual Descriptions

The integration of pre-trained language models to improve KG entity representations is predicated upon the existence of informative textual descriptions for the entities within the graph. Although this assumption holds in many scenarios, it does not hold universally. For instance, clinical data from the Electronic Health Record can naturally be represented as a knowledge graph for applications such as question answering (Park et al., 2021).

Entities like medications and procedures would have well-defined names, but others, such as those representing specific admission events or hospital stays, would be represented with a numerical ID and would not have natural textual representations. Although a hybrid approach that adaptively integrates textual information when available would likely be beneficial, the extension of our framework to such settings is left for future work.

# 11 Ethical Considerations

Knowledge graphs are valuable resources utilized by applications such as search engines (Sullivan, 2020) and automated voice assistants (Flint, 2021) to present information to users.
While KGC models have the potential to improve the coverage of such resources, they also risk introducing inaccurate facts that could mislead users. The cost of such inaccuracies can vary significantly based on the information domain (e.g. film trivia vs. medical information).

Therefore, such tools should not be deployed without careful consideration of the potential harms or the development of appropriate mitigation strategies. One way to minimize such risks is to use KGC methods to accelerate the curation of likely candidate facts that must undergo further verification before their inclusion in the knowledge graph.

# Acknowledgments

This research was funded in part by NSF grant IIS 1917955.

# References

Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. Royal Statist. Soc., Series B, 57:289-300.
Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 995-1005, Jeju Island, Korea. Association for Computational Linguistics.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pages 2787-2795.
Xingyu Cai, Jiaji Huang, Yuchen Bian, and Kenneth Church. 2021. Isotropy in the contextual embedding space: Clusters and manifolds. In International Conference on Learning Representations.
Sanxing Chen, Xiaodong Liu, Jianfeng Gao, Jian Jiao, Ruofei Zhang, and Yangfeng Ji. 2021. HittER: Hierarchical transformers for knowledge graph embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10395-10407, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Louis Clouatre, Philippe Trempe, Amal Zouaq, and Sarath Chandar. 2021. MLMLM: Link prediction with mean likelihood masked language model. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4321-4331, Online. Association for Computational Linguistics.
Daniel Daza, Michael Cochez, and Paul Groth. 2021. Inductive entity representations from text via link prediction. In Proceedings of the Web Conference 2021, WWW '21, page 798-808, New York, NY, USA. Association for Computing Machinery.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 1811-1818.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55-65, Hong Kong, China. Association for Computational Linguistics.
Emma Flint. 2021. Alexa entities launches to general availability.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894-6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain-specific language model pretraining for biomedical natural language processing. +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. + +Durk P Kingma and Prafulla Dhariwal. 2018. Glow: Generative flow with invertible $1 \times 1$ convolutions. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc. +Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119-9130, Online. Association for Computational Linguistics. +Da Li, Ming Yi, and Yukai He. 2022. LP-BERT: multitask pre-training knowledge graph BERT for link prediction. CoRR, abs/2201.04843. +Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. +Zhenghao Liu, Chenyan Xiong, Maosong Sun, and Zhiyuan Liu. 2018. Entity-duet neural ranking: Understanding the role of knowledge graph semantics in neural information retrieval. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2395-2405, Melbourne, Australia. Association for Computational Linguistics. +Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations. +Justin Lovelace, Denis Newman-Griffis, Shikhar Vashishth, Jill Fain Lehman, and Carolyn Rose. 2021. Robust knowledge graph completion with stacked convolutions and a student re-ranking network. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1016-1029, Online. Association for Computational Linguistics.

Chaitanya Malaviya, Chandra Bhagavatula, Antoine Bosselut, and Yejin Choi. 2020. Commonsense knowledge base completion with structural and semantic context. Proceedings of the 34th AAAI Conference on Artificial Intelligence.

Jiaqi Mu and Pramod Viswanath. 2018. All-but-the-top: Simple and effective postprocessing for word representations. In International Conference on Learning Representations.

Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML'11, pages 809-816, Madison, WI, USA. Omnipress.

George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. 2021. Normalizing flows for probabilistic modeling and inference. Journal of Machine Learning Research, 22(57):1-64.

Junwoo Park, Youngwoo Cho, Haneol Lee, Jaegul Choo, and E. Choi. 2021. Knowledge graph-based question answering with electronic health records. In MLHC.

Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics, 8:842-866.

Daniel Ruffinelli, Samuel Broscheit, and Rainer Gemulla. 2020. You can teach an old dog new tricks! On training knowledge graph embeddings. In International Conference on Learning Representations.

Tao Shen, Xiubo Geng, Tao Qin, Daya Guo, Duyu Tang, Nan Duan, Guodong Long, and Daxin Jiang. 2019. Multi-task learning for conversational question answering over a large-scale knowledge base. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2442-2451, Hong Kong, China. Association for Computational Linguistics.

Danny Sullivan. 2020. A reintroduction to our knowledge graph and knowledge panels.

Haitian Sun, Tania Bedrax-Weiss, and William Cohen. 2019. PullNet: Open domain question answering with iterative retrieval on knowledge bases and text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2380-2390, Hong Kong, China. Association for Computational Linguistics.

Dung Thai, Raghuveer Thirukovalluru, Trapit Bansal, and Andrew McCallum. 2021. Simultaneously self-attending to text and entities for knowledge-informed text representations. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 241-247, Online. Association for Computational Linguistics.

Raghuveer Thirukovalluru, Mukund Sridhar, Dung Thai, Shruti Chanumolu, Nicholas Monath, Sankaranarayanan Ananthakrishnan, and Andrew McCallum. 2021. Knowledge informed semantic parsing for conversational question answering. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 231-240, Online. Association for Computational Linguistics.

J. Tompson, R. Goroshin, A. Jain, Y. LeCun, and C. Bregler. 2015. Efficient object localization using convolutional networks. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 648-656.

Shubham Toshniwal, Haoyue Shi, Bowen Shi, Lingyu Gao, Karen Livescu, and Kevin Gimpel. 2020. A cross-task analysis of text span representations. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 166-176, Online. Association for Computational Linguistics.

Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57-66, Beijing, China. Association for Computational Linguistics.

Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, pages 2071-2080. JMLR.org.

Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. 2020. Composition-based multi-relational graph convolutional networks. In International Conference on Learning Representations.

Bin Wang, Guangtao Wang, Jing Huang, Jiaxuan You, Jure Leskovec, and C-C Jay Kuo. 2021a. Inductive learning on commonsense knowledge graph completion. International Joint Conference on Neural Networks (IJCNN).

Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Ying Wang, and Yi Chang. 2021b. Structure-augmented text representation learning for efficient knowledge graph completion. In Proceedings of the Web Conference 2021, WWW '21, pages 1737-1748, New York, NY, USA. Association for Computing Machinery.

Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929-9939. PMLR.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Bishan Yang, Scott Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of the International Conference on Learning Representations (ICLR) 2015.

Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. KG-BERT: BERT for knowledge graph completion. CoRR, abs/1909.03193.

# A Dataset Information

We report the details of the datasets used in this work in Table 7. For SNOMED CT Core, CN82k, and FB15k-237, we utilize the textual descriptions used by Lovelace et al. (2021). For SNOMED CT Core and CN82k, these consist of short entity names. For FB15k-237, the descriptions are short paragraphs that describe the entity. For the WN18RR dataset, we utilize the entity descriptions released by Yao et al. (2019), which consist of the word and a short definition. Unless otherwise stated, we utilize PubmedBERT to extract embeddings for the SNOMED CT Core dataset and the uncased version of BERT-base for the other three datasets.

# B Evaluation Metrics

We present a mathematical formulation of our evaluation metrics. We consider both forward and inverse relations for the datasets examined in this work. For the CN82k and FB15k-237 datasets, we follow standard procedure and introduce an inverse fact, $(e_l,r_j^{-1},e_i)$, for every fact, $(e_i,r_j,e_l)$, in the dataset. The SNOMED CT Core dataset already contains inverse relations, so manually adding inverse facts is unnecessary. We let $\mathcal{T}$ denote the set of all facts in the test set.
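As a concrete illustration, the inverse-fact augmentation can be sketched as follows (string identifiers and a toy fact are used here purely for readability; real datasets use integer ids):

```python
# Sketch: augmenting a set of KG facts with inverse facts, as described above.
# For every fact (e_i, r_j, e_l) we add the inverse fact (e_l, r_j^-1, e_i).

def add_inverse_facts(facts):
    """Return the input facts plus one inverse fact per original fact."""
    augmented = list(facts)
    for head, rel, tail in facts:
        augmented.append((tail, rel + "^-1", head))
    return augmented

facts = [("lisbon", "capital_of", "portugal")]
print(add_inverse_facts(facts))
# [('lisbon', 'capital_of', 'portugal'), ('portugal', 'capital_of^-1', 'lisbon')]
```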
The Mean Reciprocal Rank (MRR) is defined as

$$
\mathrm{MRR} = \frac{1}{|\mathcal{T}|} \sum_{(e_i, r_j, e_l) \in \mathcal{T}} \frac{1}{\operatorname{rank}(e_l)}
$$

The Hits at k (H@k) is defined as

$$
\mathrm{H@k} = \frac{1}{|\mathcal{T}|} \sum_{(e_i, r_j, e_l) \in \mathcal{T}} I[\operatorname{rank}(e_l) \leq k]
$$

where $I[P]$ is 1 if the condition $P$ is true and 0 otherwise. When computing $\operatorname{rank}(e_l)$, we first filter out all positive samples other than the target entity $e_l$. This is commonly referred to as the filtered setting. If the correct entity is tied with some other entity, we compute its rank as the average rank of all entities with that score.

# C Model Configuration Details

# C.1 Trainable Parameters

We report parameter counts for the WN18RR dataset across all the different configurations considered in this work in Table 8. The parameter counts are identical across datasets, with the exception of the relation parameters, which depend on the number of relations within each KG. The relation parameters make up a small portion of the overall parameters and are unaffected by the methods introduced in this work, so we simply report parameter counts for the WN18RR dataset for brevity.

The unsupervised Normalizing Flow technique can be applied prior to training and thus introduces zero additional trainable parameters for the ranking model. The supervised MLP and Residual MLP techniques introduce only $3.6\%$ additional trainable parameters compared to the baseline model.

Directly fine-tuning the language model during training increases the number of trainable parameters by $331.2\%$ because even the BERT-base model is over 3 times the size of the ranking model.
The parameter-efficient methods, on the other hand, have a much more modest effect, with the Linear Probe increasing the parameters by $3.0\%$ and Prompt Tuning increasing the model size by $1.2\%$.

# C.2 Training Time

We compare the training times across our different configurations. We report details for the candidate processing methods explored in Section 5 in Table 9. The normalizing flow technique has a negligible impact on training time because the unsupervised technique can be applied prior to training. The Residual MLP does increase the time per iteration, as observed by the increased time needed to complete one epoch. However, the Residual MLP also accelerates convergence, which largely offsets the aforementioned slowdown. Across all datasets, the Residual MLP outperforms the baseline even when controlling for wall clock time, for all but the shortest of training times.

We report the training times for the query entity extraction methods explored in Section 6 in Table 10. The supervised methods introduce the application of a language model, which also increases the time per iteration, as seen by the time needed to complete one epoch. The effect on accelerating the convergence of the model is not as pronounced, which means that in some cases the supervised query extraction techniques do meaningfully increase the training time compared to the baseline.
| Dataset | # Nodes | # Rels | # Train | # Valid | # Test |
| --- | --- | --- | --- | --- | --- |
| SNOMED-CT Core | 77,316 | 140 | 502,224 | 71,778 | 143,486 |
| CN82K | 78,334 | 34 | 81,920 | 10,240 | 10,240 |
| FB15K-237 | 14,451 | 237 | 272,115 | 17,535 | 20,466 |
| WN18RR | 40,943 | 11 | 86,835 | 3,034 | 3,134 |

Table 7: Dataset statistics
| Configuration | Trainable Params | Delta (%) |
| --- | --- | --- |
| **BERT-base** | | |
| BERT-ResNet | 33.2M | - |
| +Normalizing Flow | 33.2M | 0% |
| +Fine-tuning | 143.1M | 331.2% |
| +Linear Probe | 34.2M | 3.0% |
| +Prompt Tuning | 33.6M | 1.2% |
| +MLP | 34.4M | 3.6% |
| +Residual MLP | 34.4M | 3.6% |
| +Prompt Tuning | 34.8M | 4.8% |
| **BERT-large** | | |
| BERT-ResNet | 58.9M | - |
| +Residual MLP | 61.0M | 3.6% |
| +Prompt Tuning | 61.6M | 4.7% |
Table 8: Parameter Counts for WN18RR Models

# D Additional Figures

# D.1 Effect Of Embedding Processing Techniques

We report the embedding metrics across all datasets in Figure 5.

# D.2 Effect Of Query Extraction Techniques

We report the performance of different query entity extraction techniques broken down by the connectivity of the query entity in Figure 6.

# E Implementation Details

We outline our implementation details below. We begin with the details shared across all experiments and then describe the details specific to each set of experiments.

# E.1 Training Procedure

We train all ranking models for a maximum of 200 epochs and terminate training if the validation MRR has not improved for 20 epochs. We evaluate the model with the highest validation MRR on the test set.

We use a batch size of 64 with the 1vsAll training strategy (Ruffinelli et al., 2020) and the binary cross-entropy loss function. We use the Adam optimizer (Kingma and Ba, 2015) with decoupled weight decay regularization (Loshchilov and Hutter, 2019). We set the learning rate to 1e-3 and the weight decay coefficient to 1e-4. We reduce the learning rate by a factor of 0.5 if the validation MRR has plateaued for 3 epochs. We use label smoothing with a value of 0.1 and clip gradients to a maximum value of 1.

# E.2 BERT-ResNet

We reuse the reported hyperparameters from Lovelace et al. (2021) for the BERT-ResNet ranking architecture, which we describe again here. We set $f = 5$, where $f$ is the hyperparameter that controls the side length of the spatial feature map produced by the initial 1D convolution. We set $N = 2$, where $N$ controls the depth of the convolutional network. Our BERT-ResNet model then consists of $3N = 6$ bottleneck convolutional blocks. The dimensionality of the model is simply determined by the dimensionality of the language model, e.g.
$d = 768$ for experiments with BERT-base and PubmedBERT and $d = 1024$ for experiments with BERT-large. We apply dropout with drop probability 0.2 after the embedding layer and apply 2D dropout (Tompson et al., 2015) with the same probability before the convolutions. We apply dropout with probability 0.3 after the final fully connected layer. These hyperparameter values are simply the default values reported by Lovelace et al. (2021). + +# E.3 Candidate Retrieval + +We describe implementation details pertinent to the experiments conducted in Section 5. To isolate the impact of the structure of the entity embedding space, we utilize a single shared bias term across all entities instead of the per-entity bias term utilized by Lovelace et al. (2021). Thus the entity ranking is determined entirely by the query vector and the entity embeddings. All future experiments also use this shared bias term. + +For all of our embedding processing techniques, + +![](images/454981bc81d41a8df6070e4b26b17b9eb824b65922006d49facd522fbf1875f9.jpg) + +![](images/66d883b09a0e9c18ba1205972a2376920be48a249fcd80378ee7ab84b9346c2c.jpg) + +![](images/80c6d1fd52f51f955d97d11efde87543167090b148ba71736287f456e78561d3.jpg) +Figure 5: Intrinsic evaluation of embedding processing techniques. We note the MRR for each approach in parenthesis. + +![](images/d025727a5749a185b009f00317e3cca386f857dc32b48c3b10ea1105b6034c73.jpg) + +![](images/0fe5b8cbd173c7c89b34aaf6ac1afa32ab93a1b0fe5c167a1b5eee539078f7fe.jpg) + +![](images/e044e0f440ebe90d913d4267da54815cc216f2a2a77f05390e6f03e6f409ee13.jpg) + +![](images/ced53898683d981677fd47f474990467f1f813cde9aec47e8e3f9b24287959ae.jpg) +Figure 6: Performance delta of different extraction techniques across queries of varying connectivity. Error bars indicate $95\%$ confidence intervals. + +![](images/155f4b8f2826a41b1191705ea9c2896227571f7ef8c48905c0825d4e9e57f08b.jpg) + +
| Configuration | Per Epoch | Time to Best Validation MRR | Breakeven Point |
| --- | --- | --- | --- |
| **SNOMED CT Core** | | | |
| BERT-ResNet | 12m31s | 22h57m49s | - |
| +Normalizing Flow | 12m38s | 28h38m39s | 1h2m52s |
| +Residual MLP | 22m9s | 52h5m26s | 1h6m46s |
| **CN-82K** | | | |
| BERT-ResNet | 4m6s | 5h33m38s | - |
| +Normalizing Flow | 4m7s | 5h21m56s | 1h6m55s |
| +Residual MLP | 7m20s | 5h38m22s | 1h43m9s |
| **FB15k-237** | | | |
| BERT-ResNet | 11m52s | 25h7m37s | - |
| +Normalizing Flow | 11m50s | 18h10m8s | 35m32s |
| +Residual MLP | 14m31s | 15h0m48s | 14m31s |
| **WN18RR** | | | |
| BERT-ResNet | 3m56s | 11h40m37s | - |
| +Normalizing Flow | 3m56s | 10h10m7s | 43m55s |
| +Residual MLP | 5m44s | 11h5m2s | 28m37s |

Table 9: Run time for the best supervised and unsupervised processing techniques from Section 5. We report the average wall clock time per epoch, the total time until the peak validation MRR, and the breakeven point where the configuration begins to outperform the baseline.
| Configuration | Per Epoch | Time to Best Validation MRR | Breakeven Point |
| --- | --- | --- | --- |
| **SNOMED CT Core** | | | |
| BERT-ResNet | 12m32s | 34h17m41s | - |
| +Linear Probe | 18m51s | 42h8m20s | 20h29m38s |
| +Prompt-tuning | 26m10s | 60h48m38s | 41h37m57s |
| **CN-82K** | | | |
| BERT-ResNet | 4m1s | 5h25m20s | - |
| +Linear Probe | 6m25s | 4h52m17s | 4h52m17s |
| +Prompt-tuning | 8m42s | 11h2m26s | 5h40m58s |
| **FB15k-237** | | | |
| BERT-ResNet | 11m52s | 15h2m11s | - |
| +Linear Probe | 23m19s | 23h42m57s | N/A |
| +Prompt-tuning | 32m43s | 47h22m32s | N/A |
| **WN18RR** | | | |
| BERT-ResNet | 3m54s | 10h42m4s | - |
| +Linear Probe | 6m3s | 10h8m40s | 6h36m41s |
| +Prompt-tuning | 8m47s | 20h36m0s | 14h1m5s |
Table 10: Run time for query entity extraction techniques from Section 6. We report the average wall clock time per epoch, the total time until the peak validation MRR, and the breakeven point where the configuration begins to outperform the baseline.

we decouple the entity embeddings fed to the convolutional model and the entity embeddings used for candidate ranking. All of our transformations are only applied to the entity embeddings used for candidate ranking.

# E.3.1 Normalizing Flow

We define the normalizing flow with the transformation $T^{-1}(\mathbf{x}) = \mathbf{W}\mathbf{x} + \mathbf{b}$ where $\mathbf{W} \in \mathbb{R}^{d \times d}$ and $\mathbf{x}, \mathbf{b} \in \mathbb{R}^{d}$. To ensure the invertibility of $\mathbf{W}$ and to simplify the computation of the Jacobian determinant, we follow Kingma and Dhariwal (2018) and parameterize $\mathbf{W}$ using its LU decomposition, so $\mathbf{W} = \mathbf{P}\mathbf{L}(\mathbf{U} + \mathrm{diag}(\mathbf{s}))$ where $\mathbf{P} \in \mathbb{R}^{d \times d}$ is a permutation matrix, $\mathbf{L} \in \mathbb{R}^{d \times d}$ is a lower triangular matrix with ones on the diagonal, $\mathbf{U} \in \mathbb{R}^{d \times d}$ is a strictly upper triangular matrix, and $\mathbf{s} \in \mathbb{R}^d$ is a vector. During the training process, we fix $\mathbf{P}$ and learn the parameters for $\mathbf{L}$, $\mathbf{U}$, and $\mathbf{s}$.

We train the Normalizing Flow on the set of entity embeddings with a batch size of 64 for a maximum of 500 epochs using a learning rate of 1e-3 with the Adam optimizer (Kingma and Ba, 2015). We clip gradients to a max value of 1 and use the checkpoint that achieved the lowest training loss to transform the embeddings for candidate ranking.
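A practical benefit of this parameterization is that $\log|\det \mathbf{W}|$ reduces to $\sum_i \log|s_i|$, since $\mathbf{P}$ has $|\det| = 1$, $\mathbf{L}$ has unit determinant, and $\mathbf{U} + \mathrm{diag}(\mathbf{s})$ is triangular. A pure-Python sketch with an illustrative $2\times 2$ example (the real transform is learned at dimensionality $d$; the matrices below are hand-picked, not trained values):

```python
import math

# Sketch of W = P L (U + diag(s)): |det P| = 1, det L = 1, and
# U + diag(s) is upper triangular with diagonal s, so
# log|det W| = sum_i log|s_i|.

def matmul(A, B):
    """Naive matrix product of nested-list matrices."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

P = [[0.0, 1.0], [1.0, 0.0]]    # fixed permutation
L = [[1.0, 0.0], [0.5, 1.0]]    # learned, ones on the diagonal
U = [[0.0, -0.3], [0.0, 0.0]]   # learned, strictly upper triangular
s = [2.0, 0.5]                  # learned diagonal scales

Us = [[U[i][j] + (s[i] if i == j else 0.0) for j in range(2)] for i in range(2)]
W = matmul(P, matmul(L, Us))

log_det = sum(math.log(abs(si)) for si in s)
# Compare against the determinant computed directly from the 2x2 matrix W.
det_W = W[0][0] * W[1][1] - W[0][1] * W[1][0]
print(abs(math.log(abs(det_W)) - log_det) < 1e-9)  # True
```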
We normalize the transformed embeddings to have unit norm before use in candidate ranking, so an entity embedding, $\mathbf{e_i}$, is transformed as $\tilde{\mathbf{e}}_{\mathrm{i}} = \frac{T^{-1}(\mathbf{e}_{\mathrm{i}})}{\|T^{-1}(\mathbf{e}_{\mathrm{i}})\|_2}$.

# E.3.2 MLP and Residual MLP

For the supervised transformations, we set the dimensionality of the hidden layer to match the dimensionality of the entity embeddings. We use a ReLU nonlinearity and apply dropout with drop probability 0.1 after the first projection. We found it necessary to reduce the learning rate for the MLP to stabilize training, so we set the learning rate to 1e-4 for the MLP parameters. For the residual MLP, we also initialized the final linear layer to zeros so that the candidate embeddings were equivalent to the original embeddings at the start of training. All other hyperparameters remained fixed.

# E.4 Embedding Extraction Ablation

We describe implementation details pertinent to the experiments conducted in Section 5.5. We use the HuggingFace Transformers library (Wolf et al., 2020) to work with pretrained language models. For this set of experiments, we utilize the normalizing flow technique for candidate ranking to isolate the effect of the extraction techniques. For the supervised extraction experiments, we utilize the most effective unsupervised embeddings with the normalizing flow for candidate ranking.

# E.4.1 MLM Pre-training

We fine-tune the language models using the MLM pretraining objective over the set of textual entity identifiers. We fine-tune the language models for 3 epochs with a batch size of 32 and a learning rate of 3e-5. We use a linear learning rate warmup for the first $10\%$ of the total training steps. For SNOMED-CT Core, CN82K, and WN18RR, we set the maximum sequence length to 64. For FB15k-237, we set the maximum sequence length to 256 to account for the longer entity descriptions.
All other hyperparameters follow the default values from HuggingFace.

# E.5 Query Entity Extraction

# E.5.1 Linear Projection

We learn a linear projection that is applied to every hidden state of the frozen model as $\tilde{\mathbf{h}}_{\mathbf{l},\mathbf{j}} = \mathbf{h}_{\mathbf{l},\mathbf{j}}\mathbf{W}^{\top} + \mathbf{b}$ where $\mathbf{h}_{\mathbf{l},\mathbf{j}}\in \mathbb{R}^{d}$, $\mathbf{W}\in \mathbb{R}^{d\times d}$, and $\mathbf{b}\in \mathbb{R}^{d}$. We then max-pool across every token in each layer to produce a single feature vector for each layer, $\tilde{\mathbf{h}}_{\mathbf{l}}$, and aggregate these features using a learned linear combination across layers, $\tilde{\mathbf{e}}_{\mathbf{i}} = \sum_{l = 1}^{L}\lambda_{l}\cdot \tilde{\mathbf{h}}_{\mathbf{l}}$, where $\lambda_l = \mathrm{softmax}(\mathbf{a})_l$ and $\mathbf{a}\in \mathbb{R}^{L}$ is a learned vector of scalars. We set the learning rate for the embedding extraction parameters to 5e-5.

# E.5.2 Prompting

We learn continuous prompts that we prepend to the language model inputs at every layer to prompt
the frozen model (Li and Liang, 2021). We parameterize the prompt embeddings, $\mathbf{p}_{i,j} \in \mathbb{R}^{d'}$, in a low-dimensional space where $d' < d$, and learn an MLP with one hidden layer to project them to the dimensionality of the language model. We set $d' = 256$ in this work and apply dropout with drop probability 0.1 before the MLP and after the first projection. The dimensionality of the hidden layer is set to $d/2$. We also apply a shared layer normalization layer to the output of the MLP.

Therefore, the input to the $i^{\mathrm{th}}$ layer of the language model is $\mathbf{s}_i = [\mathrm{LN}(\mathrm{MLP}(\mathbf{p}_{i,0})),\ldots ,\mathrm{LN}(\mathrm{MLP}(\mathbf{p}_{i,k})),\mathbf{x}_{i,0},\ldots ,\mathbf{x}_{i,n}]$ where $\mathrm{LN}(\mathrm{MLP}(\mathbf{p}_{i,j}))\in \mathbb{R}^d$ and $\mathbf{x}_{i,j}\in \mathbb{R}^d$ are the transformed prompt token and tokenized entity embedding, respectively, for the $j^{\mathrm{th}}$ position at the $i^{\mathrm{th}}$ layer. We use $k = 3$ prompt tokens across all experiments in this work. We extract the entity representation by mean pooling across all intermediate states in each layer and aggregating across layers with a learned linear combination. We set the learning rate for the embedding extraction parameters to 5e-5.

# E.6 Effect of Language Model Selection

We describe implementation details pertinent to the experiments conducted in Section 7. For the unsupervised embedding extraction, we utilize mean-pooled embeddings from language models with additional MLM pretraining on the set of entity names. All other hyperparameters are kept constant from earlier sections.

# E.7 Ensembling

For our ensembling experiment, we train a transformer model that accepts a [CLS] token, the embedded query entity, and the embedded relation. This can be viewed as a simplified version of the HittER model from Chen et al. (2021) that does not utilize any additional graph context. The [CLS] embedding output from the final layer is used for candidate scoring.

We tune hyperparameters by running 20 trials of a random search over the grid of hyperparameters defined in Table 11. All models are trained for a maximum of 200 epochs with the AdamW optimizer. We linearly warm up the learning rate for the first 4000 steps before annealing it with a cosine decay schedule over the rest of training. We clip all gradient norms to 1 and apply early stopping with a patience of 50 epochs.
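The pool-then-aggregate extraction described in E.5.1 can be sketched as follows (toy two-layer, two-token example with hand-picked values, not trained parameters):

```python
import math

# Sketch of E.5.1: max-pool over tokens within each layer, then combine
# layers with softmax-normalized learned scalars lambda_l = softmax(a)_l.

def max_pool(token_vectors):
    """Element-wise max over a list of token vectors."""
    dim = len(token_vectors[0])
    return [max(tok[j] for tok in token_vectors) for j in range(dim)]

def softmax(a):
    m = max(a)
    exps = [math.exp(x - m) for x in a]
    z = sum(exps)
    return [e / z for e in exps]

# hidden[l][t] is the (projected) hidden state of token t at layer l.
hidden = [
    [[0.0, 0.5], [0.25, 0.25]],   # layer 1
    [[0.5, 0.0], [0.25, 0.75]],   # layer 2
]
a = [0.0, 0.0]                     # learned logits; uniform here

pooled = [max_pool(layer) for layer in hidden]   # one vector per layer
lam = softmax(a)
entity = [sum(lam[l] * pooled[l][j] for l in range(len(pooled)))
          for j in range(len(pooled[0]))]
print(pooled)   # [[0.25, 0.5], [0.5, 0.75]]
print(entity)   # [0.375, 0.625]
```

The prompting variant in E.5.2 differs only in using mean pooling over tokens instead of the max pool.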
| Hyperparameter | Search Range | Selected (FB15k-237) | Selected (WN18RR) |
| --- | --- | --- | --- |
| Learning Rate | [3e-3, 1e-3, 5e-4, 3e-4, 1e-4] | 3e-4 | 3e-3 |
| Weight Decay | [.3, .1, .03, .01, .001, 1e-4, 1e-5] | .01 | 0.1 |
| Output Dropout | [.1, .2, .3, .4, .5, .6, .7] | .7 | .5 |
| Input Dropout | [.1, .2, .3, .4, .5, .6, .7] | .6 | .5 |
| Label Smoothing | [.1, .2, .3, .4, .5, .6] | .2 | .2 |
| Number Layers | [4, 5, 6] | 6 | 5 |
| Attention Heads | 8 | 8 | 8 |
| Embedding Dim | 320 | 320 | 320 |
| Feedforward Dim | 1280 | 1280 | 1280 |
Table 11: Hyperparameter Search Space for Transformer Model

# F Validation Results

The validation results corresponding to our final results in Table 5 are reported in Table 12.
| Configuration | Dataset | MRR | H@1 | H@3 | H@10 |
| --- | --- | --- | --- | --- | --- |
| BERT-ResNet + Normalizing Flow | SNOMED CT Core | .510 | .403 | .568 | .714 |
| BERT-ResNet + Prompt-tuning + Normalizing Flow | SNOMED CT Core | .517 | .411 | .574 | .719 |
| BERT-ResNet + Residual MLP | SNOMED CT Core | .551 | .445 | .608 | .754 |
| BERT-ResNet + Prompt-tuning + Residual MLP | SNOMED CT Core | .551 | .444 | .612 | .757 |
| BERT-ResNet + Normalizing Flow | CN-82K | .196 | .133 | .216 | .323 |
| BERT-ResNet + Prompt-tuning + Normalizing Flow | CN-82K | .202 | .137 | .223 | .329 |
| BERT-ResNet + Residual MLP | CN-82K | .213 | .142 | .235 | .356 |
| BERT-ResNet + Prompt-tuning + Residual MLP | CN-82K | .218 | .146 | .240 | .363 |
| BERT-ResNet + Normalizing Flow | FB15k-237 | .362 | .279 | .393 | .529 |
| BERT-ResNet + Prompt-tuning + Normalizing Flow | FB15k-237 | .361 | .278 | .394 | .530 |
| BERT-ResNet + Residual MLP | FB15k-237 | .378 | .286 | .414 | .564 |
| BERT-ResNet + Prompt-tuning + Residual MLP | FB15k-237 | .377 | .287 | .410 | .564 |
| BERT-ResNet + Normalizing Flow | WN18RR | .582 | .511 | .610 | .729 |
| BERT-ResNet + Prompt-tuning + Normalizing Flow | WN18RR | .591 | .521 | .618 | .736 |
| BERT-ResNet + Residual MLP | WN18RR | .592 | .521 | .621 | .737 |
| BERT-ResNet + Prompt-tuning + Residual MLP | WN18RR | .600 | .531 | .626 | .742 |
+ +Table 12: Validation results corresponding to results reported in Table 5. \ No newline at end of file diff --git a/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/images.zip b/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b0d2cfc298dc3f48933497454e82d474a5de0f41 --- /dev/null +++ b/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:40c68303b4cb739e8989c8f1fba6ed0a074d2d870e9be2423fba39b3265b19da +size 969949 diff --git a/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/layout.json b/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b2d3512b09773307d8a8604ffc9ef02142cfd62c --- /dev/null +++ b/aframeworkforadaptingpretrainedlanguagemodelstoknowledgegraphcompletion/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d81f747032c3cd75d8dd7609474dc3aa05b18985eb2cde974b264170ddcc668 +size 586628 diff --git a/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/9d83c131-d685-4e08-9633-dcaab39160ad_content_list.json b/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/9d83c131-d685-4e08-9633-dcaab39160ad_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9ace38c7222a74a0eb82502c0864055dcf76e09d --- /dev/null +++ b/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/9d83c131-d685-4e08-9633-dcaab39160ad_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07435eff48397cfb39744d196c99d22e70a032eb7179d497a24efce391f10b42 +size 51796 diff --git 
a/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/9d83c131-d685-4e08-9633-dcaab39160ad_model.json b/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/9d83c131-d685-4e08-9633-dcaab39160ad_model.json new file mode 100644 index 0000000000000000000000000000000000000000..77c6f0bd462fe3d93750ebf24f012b6a647c9a5e --- /dev/null +++ b/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/9d83c131-d685-4e08-9633-dcaab39160ad_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:80bf93e174a063c60ea089369aa4c06bea68b55d4b428cfe9b9d57a2b40bd18f +size 63898 diff --git a/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/9d83c131-d685-4e08-9633-dcaab39160ad_origin.pdf b/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/9d83c131-d685-4e08-9633-dcaab39160ad_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b47a5905738158f30dff55ecb05f968ba78835f1 --- /dev/null +++ b/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/9d83c131-d685-4e08-9633-dcaab39160ad_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9aab82a7a56bec5332c0cee4b734a8741926ca133d36142a921a2270da0d95e2 +size 234700 diff --git a/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/full.md b/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/full.md new file mode 100644 index 0000000000000000000000000000000000000000..fedcf2ffbd6b8532b978ce33fe333b52116295b9 --- /dev/null +++ b/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/full.md @@ -0,0 +1,163 @@ +# AfriCLIRMatrix: Enabling Cross-Lingual Information Retrieval for African Languages

Odunayo Ogundepo$^{1}$, Xinyu Zhang$^{1}$, Shuo Sun$^{2}$, Kevin Duh$^{2}$, and Jimmy Lin$^{1}$

$^{1}$ David R.
Cheriton School of Computer Science, University of Waterloo

$^{2}$ Johns Hopkins University

$^{1}$ {oogundep, xinyucrystina.zhang, jimmylin}@uwaterloo.ca

$^{2}$ ssun32@jhu.edu, kevinduh@cs.jhu.edu

# Abstract

Language diversity in NLP is critical in enabling the development of tools for a wide range of users. However, there are limited resources for building such tools for many languages, particularly those spoken in Africa. For search, most existing datasets feature few or no African languages, directly impacting researchers' ability to build and improve information access capabilities in those languages. Motivated by this, we created AfriCLIRMatrix, a test collection for cross-lingual information retrieval research in 15 diverse African languages. In total, our dataset contains 6 million queries in English and 23 million relevance judgments automatically mined from Wikipedia inter-language links, covering many more African languages than any existing information retrieval test collection. In addition, we release BM25, dense retrieval, and sparse-dense hybrid baselines to provide a starting point for the development of future systems. We hope that these efforts can spur additional work in search for African languages. AfriCLIRMatrix can be downloaded at https://github.com/castorini/africlirmatrix.

# 1 Introduction

The ever-increasing amounts of information on the web in different languages highlight the need for systems that enable users to search in one language and retrieve relevant documents in another. This search task, commonly known as cross-lingual information retrieval (CLIR), is becoming increasingly important. CLIR can break down language barriers between information seekers and the extensive collections of documents that are available in diverse languages.

One common approach to CLIR takes advantage of machine translation and monolingual information retrieval (Zhou et al., 2012; Jiang et al., 2020).
The documents and queries are translated into the same language before search occurs. This translation is often performed using a variety of sources, including parallel corpora, bilingual dictionaries, and machine translation (MT) systems. The effectiveness of this approach relies heavily on translation quality, which may be a bottleneck for low-resource languages where high-quality translations are not readily available.

To address this challenge, researchers have recently explored the use of pretrained multilingual models (MacAvaney et al., 2020; Shi et al., 2020). Examples such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) are often pretrained on a large collection of multilingual texts, enabling the models to learn representations across different languages. The use of multilingual models for CLIR often builds on techniques that have previously been applied to monolingual retrieval (Lin et al., 2021b).

Regardless of approach, modern neural-based CLIR models are data hungry, typically requiring large amounts of query-document pairs annotated with relevance labels. Such annotated data are expensive to obtain, especially for low-resource African language pairs. Although there is ongoing research on training multilingual models for dense retrieval in low-resource settings (Zhang et al., 2022a,b), there are still not enough resources for these languages. Existing CLIR datasets do contain some African languages, such as CLIRMatrix (Sun and Duh, 2020) and the MATERIAL corpora (Zavorin et al., 2020). However, these collections contain only a few African languages, a tiny fraction of the $2000+$ languages spoken on the continent with hundreds of millions of speakers (Eberhard et al., 2019). The paucity of data hinders progress in developing information access capabilities for Africa.
As a small step towards plugging this gap, we introduce AfriCLIRMatrix, a new test collection for cross-lingual information retrieval containing geographically diverse African languages. This resource comprises English queries with query-document relevance judgments in 15 African languages automatically mined from Wikipedia. Although we only cover a small set of languages, our resource already represents a substantial enhancement over existing datasets, as AfriCLIRMatrix covers geographically diverse languages that are collectively spoken by 340 million people in Africa and worldwide.

We hope that this resource will spur research in retrieval techniques and motivate the development of more robust datasets for information retrieval in African languages. As a start, we provide a number of baselines for researchers to build on: BM25, a multilingual adaptation of DPR known as "mDPR", and a hybrid approach combining the two.

# 2 Related Work

NLP for African Languages: Natural language processing for African languages has garnered some attention in recent years and is gradually becoming an area of active research (Adebara and Abdul-Mageed, 2022). This has resulted in efforts directed at creating resources to aid research in these languages. These resources include pretrained language models (Ogundepo et al., 2022; Ogueji et al., 2021) as well as datasets for a range of common tasks (Nekoto et al., 2020; Adelani et al., 2022, 2021; Muhammad et al., 2022).

Cross-Lingual Information Retrieval: The main goal of information retrieval systems is to help users identify relevant information. In some cases, information exists in multiple languages, hence the need for cross-lingual information retrieval (Nie, 2010). While such systems enable users to access documents in foreign languages, sufficient quantities of high-quality bilingual data required to build effective CLIR systems are often unavailable for low-resource languages (Zavorin et al., 2020).
It is often expensive, time-consuming, and labor-intensive to build high-quality annotated datasets in multiple languages.

Researchers have since explored the use of automated pipelines to construct datasets for multilingual and cross-lingual information retrieval. One such pipeline is the translation of documents/queries into the desired language. For instance, Bonifacio et al. (2021) used an automatic neural machine translation system to create a multilingual version of the MS MARCO dataset (Bajaj et al., 2018) in 13 languages. Other researchers simply incorporated translation in their CLIR systems (Zhang et al., 2019; Nair et al., 2020).

Another common approach is to exploit existing large multilingual corpora, e.g., the Common Crawl1 and Wikipedia. For example, the HC4 corpus for cross-lingual information retrieval was created from Common Crawl data (Lawrie et al., 2022). Examples of exploiting Wikipedia for CLIR include WikiCLIR (Schamoni et al., 2014), CLIRMatrix (Sun and Duh, 2020), and Large Scale CLIR (Sasaki et al., 2018), among others. Although these collections typically feature a diversity of languages, they do not in general contain many African languages. Our work builds on Sun and Duh (2020) and is, to our knowledge, the first cross-lingual information retrieval dataset to specifically focus on African languages.

# 3 AfriCLIRMatrix

AfriCLIRMatrix is a new information retrieval test collection comprising queries and documents in 15 diverse African languages mined from Wikipedia, the largest such dataset that we are aware of. We focus on cross-lingual information retrieval with queries in English and documents in various African languages, listed in Table 1. We use an automated pipeline to extract document titles from English Wikipedia articles as queries, and use cross-language Wikidata links to find relevant articles in other languages.

Extraction Pipeline: Our mining pipeline is similar to the one used in Sun and Duh (2020).
For every "source" Wikipedia article in language $\mathcal{L}$, there exist inter-language links that connect the source article to articles about the same topic in other languages. We leverage these connections to extract queries and a set of relevant articles in English, and then use Wikidata backlinks to find relevant articles in other languages if they are available. We use English article titles as queries because they are readily available, span multiple domains, and have articles linked to more languages than any other language in Wikipedia. However, our pipeline also supports other forms of queries; for example, Sasaki et al. (2018) used the first sentence in each article in their dataset.

To find relevant articles, we use each query to retrieve a set of 100 articles in English using a bag-of-words retrieval system (Elasticsearch).2 Inter-language links for the retrieved articles are then used to extract similar articles in other languages. Given that BM25 scores reflect how relevant a document (article) is to a given query, we use the scores to generate relevance judgments for the retrieved documents (articles). The scores are normalized and then converted into discrete relevance grades using the Jenks natural breaks optimization algorithm (McMaster and McMaster, 2002). The documents are judged on a scale of 0 to 6, with 0 being irrelevant and 6 being the most relevant. A score of 0 is assigned to all documents not retrieved by the monolingual English pipeline (Elasticsearch), while a score of 6 is assigned to documents from articles directly connected to the title queries.

| Language | ISO | Family | Script | # Docs | # Total Queries | # Total Judgments | # Test Queries | # Test Judgments |
|---|---|---|---|---:|---:|---:|---:|---:|
| Afrikaans | afr | Indo-European | Latin | 102,675 | 1,061,394 | 1,756,005 | 1,500 | 2,557 |
| Amharic | amh | Afro-Asiatic | Ge'ez | 15,458 | 248,672 | 264,690 | 1,500 | 1,582 |
| Moroccan Arabic | ary | Afro-Asiatic | Arabic | 5,074 | 101,222 | 116,475 | 500 | 586 |
| Egyptian Arabic | arz | Afro-Asiatic | Arabic | 1,568,079 | 3,041,535 | 18,598,398 | 1,500 | 9,188 |
| Hausa | hau | Afro-Asiatic | Latin | 16,003 | 216,623 | 274,135 | 1,500 | 1,876 |
| Igbo | ibo | Niger-Congo | Latin | 4,066 | 66,835 | 78,126 | 500 | 586 |
| Northern Sotho | nso | Niger-Congo | Latin | 8,320 | 77,505 | 112,022 | 500 | 804 |
| Shona | sna | Niger-Congo | Latin | 8,258 | 118,120 | 122,483 | 500 | 515 |
| Somali | som | Afro-Asiatic | Latin | 9,860 | 193,088 | 206,431 | 1,000 | 1,049 |
| Swahili | swa | Niger-Congo | Latin | 70,808 | 697,511 | 883,657 | 1,500 | 1,891 |
| Tigrinya | tir | Afro-Asiatic | Ge'ez | 378 | 15,738 | 15,884 | 50 | 50 |
| Twi | twi | Niger-Congo | Latin | 1,838 | 43,527 | 45,849 | 250 | 258 |
| Wolof | wol | Niger-Congo | Latin | 1,693 | 67,621 | 69,865 | 250 | 255 |
| Yorùbá | yor | Niger-Congo | Latin | 33,456 | 323,368 | 430,533 | 1,000 | 1,268 |
| Zulu | zul | Niger-Congo | Latin | 10,808 | 99,987 | 164,415 | 1,000 | 1,442 |
| **Total** | | | | 1,856,566 | 6,372,746 | 23,138,969 | 13,050 | 23,907 |

Table 1: Dataset information: number of documents, number of English queries, and number of relevance judgments for each language. The table also contains other relevant information such as the language script and family. The total number of documents is equal to the number of Wikipedia articles for each language.

Dataset Statistics: A breakdown of AfriCLIRMatrix in terms of languages is shown in Table 1. Our dataset is based on the Wikipedia dump released on April 4, 2022; the number of documents in the corpus for each language is exactly equal to the number of Wikipedia articles in the corresponding dump. Due to the lack of sufficient articles for some languages, we filter out low-quality queries for each language by discarding queries whose relevant documents all have low scores (1, 2, and 3). Thus, we retain only queries with at least one relevant document of score $\geq 4$.

| Dataset | CLIR | # Lang. | African Languages |
|---|:-:|--:|---|
| WikiCLIR (Schamoni et al., 2014) | ✓ | 2 | 0 |
| HC4 (Lawrie et al., 2022) | ✓ | 3 | 0 |
| MATERIAL Corpora (Zavorin et al., 2020) | ✓ | 6 | 2: Somali, Swahili |
| CLEF Collection (Saleh and Pecina, 2019) | ✓ | 7 | 0 |
| Mr. TyDi (Zhang et al., 2021) | | 11 | 1: Swahili |
| mMARCO (Bonifacio et al., 2021) | | 13 | 0 |
| Large Scale CLIR (Sasaki et al., 2018) | ✓ | 25 | 1: Swahili |
| CLIRMatrix (Sun and Duh, 2020) | ✓ | 139 | 5: Afrikaans, Amharic, Egyptian Arabic, Swahili, Yorùbá |
| AfriCLIRMatrix (Ours) | ✓ | 16 | 15: see Table 1 |

Table 2: Dataset comparisons with other multilingual IR datasets: "CLIR" indicates whether the dataset was built for CLIR. "# Lang." shows the total number of languages. The final column lists the African languages in the dataset and their counts.

Comparison with other datasets: Table 2 shows a comparison of AfriCLIRMatrix with existing multilingual and cross-lingual datasets. The main comparison here is the number of African languages present in each dataset. Of all the African languages, Swahili appears to be the best-covered
language in the listed datasets. This is because Swahili has relatively more accessible monolingual data compared to the other languages. As far as we know, our dataset covers the most African languages of any comparable resource.

| | afr | amh | ary | arz | hau | ibo | nso | sna | som | swa | tir | twi | wol | yor | zul | avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Latin? | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | - |
| **nDCG@10** | | | | | | | | | | | | | | | | |
| BM25 | 0.434 | 0.159 | 0.167 | **0.268** | **0.508** | 0.518 | 0.445 | 0.262 | 0.305 | 0.418 | 0.080 | 0.513 | 0.134 | 0.484 | 0.247 | 0.329 |
| mDPR | 0.309 | 0.215 | **0.355** | 0.118 | 0.269 | 0.338 | 0.282 | 0.351 | 0.218 | 0.335 | **0.265** | 0.333 | 0.232 | 0.377 | 0.178 | 0.281 |
| Hybrid | **0.464** | **0.228** | 0.350 | 0.257 | **0.508** | **0.580** | **0.526** | **0.394** | **0.344** | **0.477** | 0.239 | **0.547** | **0.233** | **0.532** | **0.273** | **0.397** |
| **Recall@100** | | | | | | | | | | | | | | | | |
| BM25 | 0.584 | 0.174 | 0.224 | 0.309 | 0.650 | 0.685 | 0.629 | 0.346 | 0.403 | 0.556 | 0.080 | 0.560 | 0.166 | 0.627 | 0.289 | 0.418 |
| mDPR | 0.591 | 0.382 | 0.694 | 0.248 | 0.542 | 0.668 | 0.670 | 0.642 | 0.445 | 0.595 | 0.580 | 0.664 | 0.548 | 0.655 | 0.361 | 0.552 |
| Hybrid | **0.727** | **0.388** | **0.698** | **0.416** | **0.722** | **0.804** | **0.766** | **0.684** | **0.535** | **0.690** | **0.600** | **0.732** | **0.556** | **0.750** | **0.448** | **0.634** |

Table 3: Baseline results on the AfriCLIRMatrix test set for our three baselines: BM25, mDPR, and Hybrid. The best condition for each language is bolded. The top row indicates whether the language is written in the Latin script.

![](images/985331bbe26d2b25ad7368ce5d06b072851073ae1f14d8573a6034e80557f5b6.jpg)
Figure 1: Bar plots of nDCG@10 scores from Table 3, sorted by total judgments. There does not appear to be a correlation between data size and effectiveness.

# 4 Baselines

As a starting point for future research, we release BM25, mDPR, and sparse-dense hybrid baselines for AfriCLIRMatrix. For each language, we split the extracted queries into training and test sets, as shown in Table 1. We perform experiments on the test set and report nDCG@10 and Recall@100 scores for all conditions. Detailed instructions for reproducing all of these experiments can be found in our repository.

BM25: We report a bag-of-words BM25 (Robertson and Zaragoza, 2009) baseline obtained using the implementation provided by the Anserini IR toolkit (Yang et al., 2018), which is built on the Lucene open-source search library. We use the default Anserini configuration ($k_{1} = 0.9$, $b = 0.4$) and whitespace tokenization for analyzing the documents (and queries), since Lucene does not currently provide language-specific analyzers for any of the languages in AfriCLIRMatrix. Note that in this condition we apply the exact same analyzer to both queries and documents (in different languages); see the discussion of results below.

mDPR: We also report zero-shot results from mDPR, a multilingual adaptation of the Dense Passage Retriever (DPR) model (Karpukhin et al., 2020) in which BERT is simply replaced with multilingual BERT (mBERT).
The mDPR implementation in our experiments adopts a shared-encoder design (i.e., the same encoder for queries and passages) and was fine-tuned on the MS MARCO passage ranking dataset (Bajaj et al., 2018). Zhang et al. (2022a) showed this to be an effective baseline. Retrieval is performed in a zero-shot manner using the Faiss flat index implementation provided by the Pyserini IR toolkit (Lin et al., 2021a).

Hybrid: Hybrid results combine the sparse and zero-shot dense retrieval runs described above using Reciprocal Rank Fusion (RRF) (Cormack et al., 2009).

Although queries and documents in our experiments are not in the same language, we observe that BM25 provides a strong baseline. This makes sense: due to the nature of Wikipedia article titles, most of the queries are named entities, and English entities often appear in non-English articles, either because the entity has the same surface form or due to code switching. This makes it possible to retrieve relevant content based solely on exact lexical matches.

Results in Table 3 show that mDPR effectiveness varies across languages, but overall it is not as effective as BM25. Given the prevalence of entity-centric queries, this finding is consistent with Sciavolino et al. (2021). We observe a clear connection between the script of a language and the relative effectiveness of BM25 vs. mDPR in terms of nDCG@10. Among the 11 languages that use the Latin script, BM25 outperforms mDPR on all but sna and wol; similarly, among the other four languages, mDPR outperforms BM25 on all but arz. These results are expected, as lexical matching is straightforward when queries and documents are in the same script. Overall, we see that dense retrievers still have a long way to go for effective cross-lingual information retrieval.

Finally, the results demonstrate the effectiveness of combining sparse and dense retrieval. For 11 languages, the hybrid approach is more effective than either component alone in terms of nDCG@10. This means that, even though mDPR is less effective than BM25 in most cases, it can still provide complementary relevance signals that improve BM25 rankings.

# 5 Conclusion and Future Work

To spur interest in information retrieval research and development for African languages, we introduce a new dataset for cross-lingual information retrieval in 15 languages across different African regions. AfriCLIRMatrix is a collection of bilingual datasets with English queries and documents in 15 African languages. In addition to releasing the resource, we also provide baselines as a starting point for further research.

# 6 Limitations

Language Coverage & Diversity: Although our dataset covers 15 African languages, we still fall far short of the more than 2,000 languages spoken on the continent. However, it is worth noting that our dataset covers the largest African languages in terms of the number of speakers: collectively, the languages in our dataset are spoken by an estimated 340 million people. In terms of typological diversity, we cover three language families (Niger-Congo, Indo-European, Afro-Asiatic), but are missing others due to the lack of data in Wikipedia.

English-Centric Queries: Our dataset contains only English queries. Ideally, we would like to provide queries in all 15 African languages, but this is technically challenging due to the way we construct the collection: we first query for documents in-language, then propagate the relevance labels to a new language via Wikidata links.

We did explore running our data extraction pipeline on all pairs of languages, but the results were too sparse to be useful.
One ramification of bootstrapping the collection from English queries and associated relevance judgments on English Wikipedia documents is that there may exist bias in the types of queries (e.g., fewer questions about African people and events compared to English) and in the way they are answered. We acknowledge this limitation; in future work, it will be important to investigate other data creation methods that yield African-centric queries. + +Incomplete Inter-language Links: Wikipedia provides inter-language links connecting articles on the same topic in different languages. While running our data creation pipeline, we observed that some links to existing articles in other languages are missing. In particular, these links are often limited and exist only for high-resource languages. Therefore, we might have missed the labeling of some relevant documents. For future work, we will explore the use of cross-lingual link discovery systems (Lefever et al., 2012) to update existing inter-language links and improve the dataset. Also, the absence of human-annotated relevance judgments directly impacts the quality of the dataset. We instead present this work as a starting point for future research in creating more IR resources for African languages. + +# Acknowledgements + +This research was supported in part by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council (NSERC) of Canada; computational resources were provided by Compute Ontario and Compute Canada. + +# References + +Ife Adebara and Muhammad Abdul-Mageed. 2022. Towards Afrocentric NLP for African languages: Where we are and where we can go. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3814-3841, Dublin, Ireland. Association for Computational Linguistics. 
+David Adelani, Jesujoba Alabi, Angela Fan, Julia Kreutzer, Xiaoyu Shen, Machel Reid, Dana Ruiter, Dietrich Klakow, Peter Nabende, Ernie Chang, Tajuddeen Gwadabe, Freshia Sackey, Bonaventure F. P. Dossou, Chris Emezue, Colin Leong, Michael Beukman, Shamsuddeen Muhammad, Guyo Jarso, Oreen Yousuf, Andre Niyongabo Rubungo, Gilles Hacheme, Eric Peter Wairagala, Muhammad Umair Nasir, Benjamin Ajibade, Tunde Ajayi, Yvonne Gitau, Jade Abbott, Mohamed Ahmed, Millicent Ochieng, Anuoluwapo Aremu, Perez Ogayo, Jonathan Mukiibi, Fatoumata Ouoba Kabore, Godson Kalipe, Derguene Mbaye, Allahsera Auguste Tapo, Victoire Memd-jokam Koagne, Edwin Munkoh-Buabeng, Valencia Wagner, Idris Abdulmumin, Ayodele Awokoya, Happy Buzaaba, Blessing Sibanda, Andiswa Bukula, and Sam Manthalu. 2022. A few thousand translations go a long way! leveraging pre-trained models for African news translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3053-3070, Seattle, United States. Association for Computational Linguistics. +David Ifeoluwa Adelani, Jade Abbott, Graham Neubig, Daniel D'souza, Julia Kreutzer, Constantine Lignos, Chester Palen-Michel, Happy Buzaaba, Shruti Rijhwani, Sebastian Ruder, Stephen Mayhew, Israel Abebe Azime, Shamsuddeen H. 
Muhammad, Chris Chinenye Emezue, Joyce Nakatumba-Nabende, Perez Ogayo, Aremu Anuoluwapo, Catherine Gitau, Derguene Mbaye, Jesujoba Alabi, Seid Muhie Yimam, Tajuddeen Rabiu Gwadabe, Ignatius Ezeani, Rubungo Andre Niyongabo, Jonathan Mukiibi, Verrah Otiende, Iroro Orife, Davis David, Samba Ngom, Tosin Adewumi, Paul Rayson, Mofetoluwa Adeyemi, Gerald Muriuki, Emmanuel Anebi, Chiamaka Chukwuneke, Nkiruka Odu, Eric Peter Wairagala, Samuel Oyerinde, Clemencia Siro, Tobias Saul Bateesa, Temilola Oloyede, Yvonne Wambui, Victor Akinode, Deborah Nabagereka, Maurice Katusiime, Ayodele Awokoya, Mouhamadane MBOUP, Dibora Gebreyohannes, Henok Tilaye, Kelechi Nwaike, Degaga Wolde, Abdoulaye Faye, Blessing Sibanda, Orevaoghene Ahia, Bonaventure F. P. Dossou, Kelechi Ogueji, Thierno Ibrahima DIOP, Abdoulaye Diallo, Adewale Akinfaderin, Tendai Marengereke, and Salomey Osei. 2021. MasakhaNER: Named entity recognition for African languages. Transactions of the Association for Computational Linguistics, 9:1116-1131. +Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, An + +drew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2018. MS MARCO: A Human Generated MAchine Reading CComprehension Dataset. arXiv:1611.09268v3. +Luiz Bonifacio, Vitor Jeronymo, Hugo Queiroz Abonizio, Israel Campiotti, Marzieh Fadaee, Roberto Lotufo, and Rodrigo Nogueira. 2021. mMARCO: A multilingual version of MS MARCO passage ranking dataset. arXiv:2108.13897. +Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle-moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics. +Gordon V. 
Cormack, Charles L A Clarke, and Stefan Buettcher. 2009. Reciprocal rank fusion outperforms condorcet and individual rank learning methods. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '09, page 758-759, New York, NY, USA. Association for Computing Machinery. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +David M. Eberhard, Gary F. Simons, and Charles D. Fennig. 2019. *Ethnologue: Languages of the World*, 22nd edition. SIL International, Dallas. +Zhuolin Jiang, Amro El-Jaroudi, William Hartmann, Damianos Karakos, and Lingjun Zhao. 2020. Cross-lingual information retrieval with BERT. In Proceedings of the workshop on *Cross-Language Search and Summarization of Text* and Speech (CLSSTS2020), pages 26–31, Marseille, France. European Language Resources Association. +Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics. +Dawn Lawrie, James Mayfield, Douglas W. Oard, and Eugene Yang. 2022. HC4: A new suite of test collections for ad hoc CLIR. In Proceedings of the 44th European Conference on Information Retrieval (ECIR 2022). + +Els Lefever, Véronique Hoste, and Martine De Cock. 2012. Discovering missing Wikipedia inter-language links by means of cross-lingual word sense disambiguation. 
In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 841-846, Istanbul, Turkey. European Language Resources Association (ELRA).
Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021a. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), pages 2356-2362.
Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2021b. Pretrained Transformers for Text Ranking: BERT and Beyond. Morgan & Claypool Publishers.
Sean MacAvaney, Luca Soldaini, and Nazli Goharian. 2020. Teaching a new dog old tricks: Resurrecting multilingual retrieval using zero-shot learning. In Proceedings of the 42nd European Conference on IR Research, Part II, pages 246-254.
Robert B. McMaster and Susanna McMaster. 2002. A history of twentieth-century American academic cartography. Cartography and Geographic Information Science, 29:305-321.
Shamsuddeen Hassan Muhammad, David Ifeoluwa Adelani, Sebastian Ruder, Ibrahim Sa'id Ahmad, Idris Abdulmumin, Bello Shehu Bello, Monojit Choudhury, Chris Chinenye Emezue, Saheed Salahudeen Abdullahi, Anuoluwapo Aremu, Alipio Jorge, and Pavel Brazdil. 2022. NaijaSenti: A Nigerian Twitter sentiment corpus for multilingual sentiment analysis. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 590-602, Marseille, France. European Language Resources Association.
Suraj Nair, Petra Galuscakova, and Douglas W. Oard. 2020. Combining contextualized and noncontextualized query translations to improve CLIR. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2020), pages 1581-1584.
+Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddeen Muhammad, Salomon Kabongo Kabenamualu, Salomey Osei, Freshia Sackey, Rubungo Andre Niyongabo, Ricky Macharm, Perez Ogayo, Orevaoghene Ahia, Musie Meressa Berhe, Mofetoluwa Adeyemi, Masabata Mokgesi-Selinga, Lawrence Okegbemi, Laura Martinus, Kolawole Tajudeen, Kevin Degila, Kelechi Ogueji, Kathleen Siminyu, Julia Kreutzer, Jason Webster, Jamiil Toure Ali, Jade Abbott, Iroro Orife, Ignatius Ezeani, Idris Abdulkadir Dangana, Herman Kamper, Hady Elsahar, Goodness Duru, Ghollah Kioko, Murhabazi Espoir, Elan van + +Biljon, Daniel Whitenack, Christopher Onyefuluchi, Chris Chinenye Emezue, Bonaventure F. P. Dossou, Blessing Sibanda, Blessing Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp Öktem, Adewale Akinfaderin, and Abdallah Bashir. 2020. Participatory research for low-resourced machine translation: A case study in African languages. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 2144–2160, Online. Association for Computational Linguistics. +Jian-Yun Nie. 2010. *Cross-Language Information Retrieval*. Morgan & Claypool Publishers. +Kelechi Ogueji, Yuxin Zhu, and Jimmy Lin. 2021. Small data? No problem! Exploring the viability of pretrained multilingual language models for low-resourced languages. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 116-126, Punta Cana, Dominican Republic. Association for Computational Linguistics. +O Dunayo Ogundepo, Akintunde Oladipo, Mofetoluwa Adeyemi, Kelechi Ogueji, and Jimmy Lin. 2022. AfriTeVA: Extending "small data" pretraining approaches to sequence-to-sequence models. In Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, pages 126-135, Hybrid. Association for Computational Linguistics. +Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. 
Foundation and Trends in Information Retrieval, 3(4):333-389. +Shadi Saleh and Pavel Pecina. 2019. An extended CLEF eHealth test collection for cross-lingual information retrieval in the medical domain. In Proceedings of the 41st European Conference on Information Retrieval (ECIR 2019), pages 188-195. +Shota Sasaki, Shuo Sun, Shigehiko Schamoni, Kevin Duh, and Kentaro Inui. 2018. Cross-lingual learning-to-rank with shared representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 458-463, New Orleans, Louisiana. Association for Computational Linguistics. +Shigehiko Schamoni, Felix Hieber, Artem Sokolov, and Stefan Riezler. 2014. Learning translational and knowledge-based similarities from relevance rankings for cross-language retrieval. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 488-494, Baltimore, Maryland. Association for Computational Linguistics. +Christopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, and Danqi Chen. 2021. Simple entity-centric questions challenge dense retrievers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6138-6148, Online + +and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Peng Shi, He Bai, and Jimmy Lin. 2020. Cross-lingual training of neural models for document ranking. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 2768–2773, Online. Association for Computational Linguistics. +Shuo Sun and Kevin Duh. 2020. CLIRMatrix: A massively large collection of bilingual and multilingual datasets for cross-lingual information retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4160–4170, Online. Association for Computational Linguistics. 
+Peilin Yang, Hui Fang, and Jimmy Lin. 2018. Anserini: Reproducible ranking baselines using Lucene. Journal of Data and Information Quality, 10(4):Article 16. +Ilya Zavorin, Aric Bills, Cassian Corey, Michelle Morrison, Audrey Tong, and Richard Tong. 2020. Corpora for cross-language information retrieval in six less-resourced languages. In Proceedings of the workshop on Cross-Language Search and Summarization of Text and Speech (CLSSTS2020), pages 7-13, Marseille, France. European Language Resources Association. +Rui Zhang, Caitlin Westerfield, Sungrok Shim, Garrett Bingham, Alexander Fabbri, William Hu, Neha Verma, and Dragomir Radev. 2019. Improving low-resource cross-lingual document retrieval by reranking with deep bilingual representations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3173-3179, Florence, Italy. Association for Computational Linguistics. +Xinyu Zhang, Xueguang Ma, Peng Shi, and Jimmy Lin. 2021. Mr. TyDi: A multi-lingual benchmark for dense retrieval. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 127-137, Punta Cana, Dominican Republic. Association for Computational Linguistics. +Xinyu Zhang, Kelechi Ogueji, Xueguang Ma, and Jimmy Lin. 2022a. Towards best practices for training multilingual dense retrieval models. arXiv:2204.02363. +Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, and Jimmy Lin. 2022b. Making a MIRACL: Multilingual information retrieval across a continuum of languages. arXiv:2210.09984. +Dong Zhou, Mark Truran, Tim Brailsford, Vincent Wade, and Helen Ashman. 2012. Translation techniques in cross-language information retrieval. ACM Computing Surveys, 45(1). 
\ No newline at end of file diff --git a/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/images.zip b/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7c38f0b6093c9b4c5bc6ae00e832cb6981f4a446 --- /dev/null +++ b/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc01ef7f83ce331cc735c11448f4835b38bcf90ef8f1a334a1c9ff6da4502da6 +size 304133 diff --git a/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/layout.json b/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..22f23ce66e54d2a1efab6e8792275480c44e02f1 --- /dev/null +++ b/africlirmatrixenablingcrosslingualinformationretrievalforafricanlanguages/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a1665102e443861dcd522e9fdf9b4fe7481a2526dda746ccdf4f1cfe3eb8eec +size 186738 diff --git a/afrolidaneurallanguageidentificationtoolforafricanlanguages/23baf2a8-b0cd-4131-a726-5ad5bc1433a9_content_list.json b/afrolidaneurallanguageidentificationtoolforafricanlanguages/23baf2a8-b0cd-4131-a726-5ad5bc1433a9_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e22ea38b5c502a5011037a607baf6c74213c983f --- /dev/null +++ b/afrolidaneurallanguageidentificationtoolforafricanlanguages/23baf2a8-b0cd-4131-a726-5ad5bc1433a9_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e360b1a96f3ba3d8738efab9d7c7cae8edafaf7b652a99e7210ab34f5f2341a +size 153183 diff --git a/afrolidaneurallanguageidentificationtoolforafricanlanguages/23baf2a8-b0cd-4131-a726-5ad5bc1433a9_model.json 
b/afrolidaneurallanguageidentificationtoolforafricanlanguages/23baf2a8-b0cd-4131-a726-5ad5bc1433a9_model.json new file mode 100644 index 0000000000000000000000000000000000000000..7ba9e1df46fdb51ab80a12cb73a222f228de7142 --- /dev/null +++ b/afrolidaneurallanguageidentificationtoolforafricanlanguages/23baf2a8-b0cd-4131-a726-5ad5bc1433a9_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0dd926cab0a87a323cc0e0e07d7783b180a0d27a8542e82261854509802ef3b3 +size 182767 diff --git a/afrolidaneurallanguageidentificationtoolforafricanlanguages/23baf2a8-b0cd-4131-a726-5ad5bc1433a9_origin.pdf b/afrolidaneurallanguageidentificationtoolforafricanlanguages/23baf2a8-b0cd-4131-a726-5ad5bc1433a9_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..68024643a09c5fde225d2cb6bd9fb82a45a88e13 --- /dev/null +++ b/afrolidaneurallanguageidentificationtoolforafricanlanguages/23baf2a8-b0cd-4131-a726-5ad5bc1433a9_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc7468ad7bff4f251877a7c63c4dddcf42cfe6f6990f07f1c422b8bef65bc446 +size 3314952 diff --git a/afrolidaneurallanguageidentificationtoolforafricanlanguages/full.md b/afrolidaneurallanguageidentificationtoolforafricanlanguages/full.md new file mode 100644 index 0000000000000000000000000000000000000000..235d06b9ae8fa072d33948d0517c4c961eeaa8a3 --- /dev/null +++ b/afrolidaneurallanguageidentificationtoolforafricanlanguages/full.md @@ -0,0 +1,445 @@ +# AfroLID: A Neural Language Identification Tool for African Languages + +Ife Adebara $^{1,\star}$ AbdelRahim Elmadany $^{1,\star}$ Muhammad Abdul-Mageed $^{1,2}$ Alcides Alcoba Inciarte $^{1}$ + +$^{1}$ Deep Learning & Natural Language Processing Group, The University of British Columbia +$^{2}$ Department of Natural Language Processing & Department of Machine Learning, MBZUAI + +{ife.adebara@a.elmadany@,muhammad.mageed@,alcobaaj@mail.}ubc.ca + +# Abstract + +Language identification (LID) is a crucial 
precursor for NLP, especially for mining web data. Proportionally, most of the world's 7,000+ languages today are not covered by LID technologies. We address this pressing issue for Africa by introducing AfroLID, a neural LID toolkit for 517 African languages and varieties. AfroLID exploits a multi-domain web dataset manually curated from across 14 language families utilizing five orthographic systems. When evaluated on our blind Test set, AfroLID achieves a 95.89 $F_{1}$-score. We also compare AfroLID to five existing LID tools that each cover a small number of African languages, finding it to outperform them on most languages. We further show the utility of AfroLID in the wild by testing it on the acutely under-served Twitter domain. Finally, we offer a number of controlled case studies and perform a linguistically-motivated error analysis that allow us to showcase both AfroLID's powerful capabilities and its limitations. $^{1}$

# 1 Introduction

Language identification (LID) is the task of identifying the human language a piece of text or speech segment belongs to. The proliferation of social media has allowed greater access to multilingual data, making automatic LID an important first step in processing human language appropriately (Tjandra et al., 2021; Thara and Poornachandran, 2021). This includes applications in speech, sign language, handwritten text, and other modalities of language. It also includes distinguishing languages in codemixed datasets (Abdul-Mageed et al., 2020; Thara and Poornachandran, 2021). Unfortunately, for the majority of languages in the world, including most African languages, we do not have the resources for developing LID tools.

![](images/0b4fb83b30dd8eeec2d2d316bdcbe6435d2c58e425e94ea77dfaf8ef02ed8292.jpg)
Figure 1: All 50 African countries in our data, with our 517 languages/language varieties in colored circles overlaid within respective countries. More details are in Appendix E.
This situation has implications for future NLP technologies. For instance, LID has facilitated the development of widely multilingual models such as mT5 (Xue et al., 2021) and large multilingual datasets such as CCAligned (El-Kishky et al., 2020), ParaCrawl (Esplà et al., 2019), WikiMatrix (Schwenk et al., 2021), OSCAR (Ortiz Suárez et al., 2020), and mC4 (Xue et al., 2021), which have advanced research in NLP. Comparable resources are completely unavailable for the majority of the world's 7,000+ languages today, with only poor coverage of the so-called low-resource (LR) languages. This is partly due to the absence of LID tools, and it impedes future NLP progress on these languages (Adebara and Abdul-Mageed, 2022). The state of African languages is no better than that of other regions: Kreutzer et al. (2021) perform a manual evaluation of 205 datasets involving African languages, such as those in CCAligned, ParaCrawl, WikiMatrix, OSCAR, and mC4, and show that at least 15 corpora were completely erroneous, a significant fraction contained less than 50% correct data, and 82 corpora were mislabelled or used ambiguous language codes. These errors consequently affect the quality of models built with these datasets. Alabi et al. (2020) find that 135K out of 150K words in the fastText embeddings for Yorubá belong to other languages such as English, French, and Arabic. New embedding models created by Alabi et al. (2020) with a curated high-quality dataset outperform off-the-shelf fastText embeddings, even though the curated data is smaller.

In addition to resource creation, the lack (or poor performance) of LID tools negatively impacts preprocessing of LR languages, since LID can be a prerequisite for determining, e.g., appropriate tokenization (Duvenhage et al., 2017a). Furthermore, some preprocessing approaches may be necessary for certain languages but may hurt performance in others (Adebara and Abdul-Mageed, 2022). Developing LID tools is thus vital for all NLP.
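To make the LID task concrete, a classical non-neural baseline scores text against per-language character n-gram profiles. The toy sketch below is purely illustrative (the helper names and two-language training samples are made up for the example); AfroLID itself is a neural classifier trained on manually curated multi-domain data.

```python
# Toy character-trigram LID: pick the language whose trigram profile
# overlaps most with the input text. Illustrative only.
from collections import Counter

def trigrams(text):
    text = f"  {text.lower()}  "  # pad so word boundaries form trigrams
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def train(samples):
    # samples: {language: [example sentences]} -> {language: trigram profile}
    return {lang: sum((trigrams(s) for s in sents), Counter())
            for lang, sents in samples.items()}

def identify(text, profiles):
    grams = trigrams(text)
    # score = size of the multiset intersection with each language profile
    return max(profiles,
               key=lambda lang: sum(min(c, profiles[lang][g])
                                    for g, c in grams.items()))

profiles = train({
    "swa": ["habari ya asubuhi", "asante sana rafiki"],
    "eng": ["good morning friend", "thank you very much"],
})
```

Real systems of this family (e.g., n-gram-based tools like Franc) use far larger profiles, smoothing, and probabilistic scoring, but the core idea is the same.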
In this work, we focus on LID for African languages and introduce AfroLID. AfroLID is a neural LID tool that covers 517 African languages and language varieties $^2$ across 14 language families. The languages covered belong to 50 African countries and are written in five diverse scripts. We show the countries covered by AfroLID in Figure 1. Examples of the different scripts involved in the 517 languages are displayed in Figure 2. To the best of our knowledge, AfroLID supports the largest subset of African languages to date. AfroLID is also usable without any end-user training, and it exploits data from a variety of domains to ensure robustness. We manually curate our clean training data, which is of special significance in low-resource settings. We show the utility of AfroLID in the wild by applying it on two Twitter datasets and compare its performance with existing LID tools that cover any number of African languages, such as CLD2 (McCandless, 2010), CLD3 (Salcianu et al., 2018), Franc, LangDetect (Shuyo, 2010), and Langid.py (Lui and Baldwin, 2012). Our results show that AfroLID consistently outperforms all other LID tools for almost all languages, and serves as the new SOTA for language identification for African languages.

![](images/dcc27a52a59c9e541ea73e9019c71b5215540bdebe229376fe678a5f74516f65.jpg)
Figure 2: Examples from the five scripts in our data.

To summarize, we offer the following main contributions:

1. We develop AfroLID, a SOTA LID tool for 517 African languages and language varieties. To facilitate NLP research, we make our models publicly available.
2. We carry out a study of LID tool performance on African languages where we compare our models in controlled settings with several tools such as CLD2, CLD3, Franc, LangDetect, and Langid.py.
3. Our models exhibit highly accurate performance in the wild, as demonstrated by applying AfroLID on Twitter data.
4. 
We provide a wide range of controlled case studies and carry out a linguistically-motivated error analysis of AfroLID. This allows us to motivate plausible directions for future research, including potentially beyond African languages.

The rest of the paper is organized as follows: In Section 2 we discuss a number of typological features of our supported languages. We describe AfroLID's training data in Section 3. Next, we introduce AfroLID in Section 4. This includes our experimental datasets and their splits, preprocessing, vocabulary, implementation and training details, and our evaluation settings. We present the performance of AfroLID in Section 5 and compare it to other LID tools. Our analyses show that AfroLID outperforms other models for most languages. In the same section, we also describe the utility of AfroLID on non-Latin scripts, Creole languages, and languages in close geographical proximity. Although AfroLID is not trained on Twitter data, we experiment with tweets in Section 6 in order to investigate its performance in out-of-domain scenarios. Through two diagnostic studies, we demonstrate AfroLID's robustness. We provide an overview of related work in Section 7. We conclude in Section 8, and outline a number of limitations of our work in Section 9.

# 2 Typological Information

Language Families. We experiment with 517 African languages and language varieties across 50 African countries. These languages belong to 14 language families (Eberhard et al., 2021), as follows: Afro-Asiatic, Austronesian, Creole (English based), Creole (French based), Creole (Kongo based), Creole (Ngbadi based), Creole (Portuguese based), Indo-European, Khoe-Kwadi (Hainum), Khoe-Kwadi (Nama), Khoe-Kwadi (Southwest), Niger-Congo, and Nilo-Saharan. The large and typologically diverse data we exploit hence endows our work with wide coverage. We show in Figure 1 a map of Africa with the countries AfroLID covers.
We also show the number of languages we cover, per country, in Figure E in the Appendix. Table E.1, Table E.2, and Table E.3 in the Appendix also provide a list of the languages AfroLID handles. We represent the languages using ISO-3 codes for both individual languages and macro-languages. We use a macro-language tag when the language is known but the specific dialect is unknown. For this reason, we specify that AfroLID supports 517 African languages and language varieties.

Sentential Word Order. There are seven categories of word order across human languages around the world. These are subject-verb-object (SVO), subject-object-verb (SOV), object-verb-subject (OVS), object-subject-verb (OSV), verb-object-subject (VOS), verb-subject-object (VSO), and languages lacking a dominant order (which often combine two or more orders within their grammar) (Dryer and Haspelmath, 2013). Again, our dataset is very diverse: we cover five out of these seven types of word order. Table 1 shows sentential word order in our data, with some representative languages for each category.

Diacritics. Diacritic marks are used to overcome the inadequacies of an alphabet in capturing important linguistic information by adding a distinguishing mark to a character in an alphabet. Diacritics are often used to indicate tone, length, case, nasalization, or even to distinguish different letters of a
language's alphabet (Wells, 2000; Hyman, 2003; Creissels et al., 2008). Diacritics can be placed above, below, or through a character. Diacritics are common features of the orthographies of African languages. Out of 517 languages/language varieties in our training data, 295 use some diacritics in their orthographies. We also provide a list of languages with diacritics in our training data in Table C.3 in the Appendix.

| Word Order | Example Languages |
| --- | --- |
| SVO | Xhosa, Zulu, Yorùbá |
| SOV | Khoekhoe, Somali, Amharic |
| VSO | Murle, Kalenjin |
| VOS | Malagasy |
| No-dominant-order | Siswati, Nyamwezi, Bassa |

Table 1: Sentential word order in our data.
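Because diacritics here are contrastive rather than decorative, whether a pipeline keeps or strips them changes the text itself. The snippet below is a minimal illustration (not part of the AfroLID pipeline) of detecting and stripping combining marks with Python's standard `unicodedata` module; `has_diacritics` and `strip_diacritics` are hypothetical helper names introduced only for this sketch.

```python
import unicodedata

def has_diacritics(text: str) -> bool:
    """Return True if any character decomposes into a base letter plus a combining mark."""
    decomposed = unicodedata.normalize("NFD", text)
    return any(unicodedata.combining(ch) for ch in decomposed)

def strip_diacritics(text: str) -> str:
    """Remove combining marks (for analysis only; AfroLID keeps diacritics)."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(has_diacritics("Yorùbá"))    # True: grave and acute accents are combining marks
print(strip_diacritics("Yorùbá"))  # "Yoruba"
```

Note that stripping can merge distinct words, which is exactly why diacritics matter for LID in these orthographies.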
| Script | Languages |
| --- | --- |
| Ethiopic | Amharic, Basketo, Maale, *Oromo, Sebat Bet Gurage, Tigrinya, Xamtanga |
| Arabic | Fulfude Adamawa, Fulfude Caka, Tarifit |
| Vai | Vai |
| Coptic | Coptic |
Table 2: Non-Latin scripts in AfroLID data. *Oromo: is available in Latin script as well.

Scripts. Our dataset consists of 14 languages written in four different non-Latin scripts and 499 languages written in the Latin script. The non-Latin scripts are Ethiopic, Arabic, Vai, and Coptic.

# 3 Curating an African Language Dataset

AfroLID is trained using a multi-domain, multi-script language identification dataset that we manually curated for building our tool. To collect the dataset, we perform an extensive manual analysis of African language presence on the web, identifying as much publicly available data from the 517 language varieties we treat as is possible. We adopt this manual curation approach since only few African languages have any LID tool coverage. In addition, available LID tools that treat African languages tend to perform unreliably (Kreutzer et al., 2021). We therefore consult research papers focusing on African languages, such as Adebara and Abdul-Mageed (2022), or papers that provide language data (Muhammad et al., 2022; Alabi et al., 2020), sifting through references to find additional African data sources. Moreover, we search for newspapers across all 54 African countries. $^{4}$ We also collect data from social media such as blogs and web fora written in African languages, as well as databases that store African language data. These include LANAFRICA, SADiLaR, Masakhane, Niger-Volta-LTI, and ALTI. Our resulting multi-domain dataset contains religious texts, government documents, health documents, crawls from curated web pages, news articles, and existing human-identified datasets for African languages. As an additional sanity check, we ask a number of native speakers from a subset of the languages to verify the correctness of the self-labels assigned in respective sources within our collections.
$^{5}$ Our manual inspection step gave us confidence about the quality of our dataset, providing near-perfect agreement by native speakers with labels from data sources. In total, we collect 100 million sentences in 528 languages across 14 language families in Africa and select the 517 languages that had at least 2,000 sentences. Again, the dataset covers various orthographic scripts, including 499 languages in the Latin script, eight languages in the Ethiopic script, four languages in the Arabic script, one language in the Vai script, and one in the Coptic script.

# 4 AfroLID

Experimental Dataset and Splits. From our manually-curated dataset, we randomly select 5,000, 50, and 100 sentences for train, development, and test, respectively, for each language. Overall, AfroLID data comprises 2,496,980 sentences for training (Train), 25,850 for development (Dev), and 51,400 for test (Test) for 517 languages and language varieties.

Preprocessing. We ensure that our data represent naturally occurring text by performing only minimal preprocessing. Specifically, we tokenize our data into character, byte-pair, and word units. We do not remove diacritics, and we keep both precomposed and decomposed characters to cater for the inconsistent use of precomposed and decomposed characters by many African languages in digital media. $^{7}$

We create our character-level tokenization scripts and generate our vocabulary using Fairseq. We use the sentencepiece tokenizer to produce word-level and byte-pair tokens before preprocessing in Fairseq.

Vocabulary. We experiment with byte-pair (BPE), word, and character level encodings. We use vocabulary sizes of 64K, 100K, and 2,260 for the BPE, word, and character level models across the 517 language varieties. The character vocabulary includes letters, diacritics, and symbols from the non-Latin scripts of the respective languages.

![](images/238c572ec35bc099003e760abc83a76926bdf8ff36ea88e786ff3d23ea453744.jpg)
Figure 3: $F_{1}$ distribution on AfroLID Dev set.
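The precomposed/decomposed issue above can be made concrete: the same visible string can be encoded either with single precomposed code points (NFC) or with base letters plus combining marks (NFD), and the two encodings do not compare equal as code-point sequences. A small sketch using Python's `unicodedata` (illustrative only; as noted above, the AfroLID data keeps both forms rather than normalizing):

```python
import unicodedata

# The same Yorùbá word encoded two ways: precomposed code points vs.
# base letters followed by combining marks.
precomposed = "Yor\u00f9b\u00e1"   # "Yorùbá" with ù and á as single code points (NFC)
decomposed = "Yoru\u0300ba\u0301"  # "Yorùbá" with U+0300/U+0301 combining accents (NFD)

# Visually identical, yet unequal as code-point sequences:
print(precomposed == decomposed)  # False

# Normalizing to a common form makes them comparable:
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
print(unicodedata.normalize("NFD", precomposed) == decomposed)  # True
```

A model that only ever saw one form would treat the other as unseen characters, which is why both are kept in the vocabulary.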
Implementation. AfroLID is built using a Transformer architecture trained from scratch. We use 12 attention layers with 12 heads in each layer and 768 hidden dimensions, making up $\sim$ 200M parameters. $^{8}$

Hyperparameter Search and Training. To identify our best hyperparameters, we use a subset of our training data and the full development set for our hyperparameter search. Namely, we randomly sample 200 examples from each language in our training data to create a smaller train set, while using our full Dev set. We train for up to 100 epochs, with early stopping. We search over the following hyperparameter values: dropout rates from the set $\{0.1, 0.2, 0.3, 0.4, 0.5\}$ , learning rates from $\{5e-5, 5e-6\}$ , and patience from $\{10, 20, 30\}$ . Other hyperparameters are similar to those for XLM-R (Conneau et al., 2020). We perform hyperparameter search only with our character-level model and use the identified values with both the BPE and word models.

Evaluation. We report our results in both macro $F_{1}$ -score and accuracy, selecting our best model on Dev based on $F_{1}$ . For all our models, we report the average of three runs.

![](images/e3699c24a4d97144091c183c264ac19c6fe05d496b8b6ff00b42d432ed1e150e.jpg)
Figure 4: $F_{1}$ distribution on AfroLID Test set.

# 5 Model Performance and Analysis

As Table 3 shows, our BPE model outperforms both the char and word models on both Dev and Test data. On Dev, our BPE model achieves 96.14 $F_{1}$ and 96.19 acc, compared to 85.75 $F_{1}$ and 85.85 acc for the char model, and 90.22 $F_{1}$ and 90.34 acc for the word model. Our BPE model similarly excels on Test, with 95.95 $F_{1}$ and 96.01 acc. We inspect the distribution of $F_{1}$ on the entire Dev and Test sets using our BPE model, as shown in Figures 3 and 4.
As annotated in Figure 3, a total of 212 of the 517 languages (41.00%) are identified with 100 $F_{1}$ , 197 languages (38.10%) are identified with 95-99 $F_{1}$ , and 69 languages (13.30%) are identified with 90-95 $F_{1}$ . For Test data (Figure 4), on the other hand, 128 languages (24.75%) are identified with 100 $F_{1}$ , 299 languages (57.83%) are between 95-99 $F_{1}$ , while 56 languages (10.83%) are between 90-95 $F_{1}$ .
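As a concrete sketch of the evaluation protocol above, the following stand-alone Python computes a macro-averaged $F_{1}$ over predictions and buckets per-language scores into the bands reported here. `macro_f1` and `f1_bands` are hypothetical helpers written for this illustration, not AfroLID code:

```python
from collections import defaultdict

def macro_f1(gold, pred):
    """Macro-averaged F1 (in percent) over all labels seen in gold or predictions."""
    labels = set(gold) | set(pred)
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    scores = []
    for lab in labels:
        prec = tp[lab] / (tp[lab] + fp[lab]) if tp[lab] + fp[lab] else 0.0
        rec = tp[lab] / (tp[lab] + fn[lab]) if tp[lab] + fn[lab] else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return 100 * sum(scores) / len(scores)

def f1_bands(per_lang_f1):
    """Bucket per-language F1 scores into the bands used in Figures 3 and 4."""
    bands = {"100": 0, "95-99": 0, "90-95": 0, "<90": 0}
    for f in per_lang_f1.values():
        if f == 100:
            bands["100"] += 1
        elif f >= 95:
            bands["95-99"] += 1
        elif f >= 90:
            bands["90-95"] += 1
        else:
            bands["<90"] += 1
    return bands

print(macro_f1(["amh", "amh", "yor"], ["amh", "amh", "yor"]))  # 100.0 for perfect predictions
```

Macro-averaging weights every language equally, which is why it is the natural headline metric for a 517-way classifier with balanced test splits.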
| Model | Split | F1-score | Accuracy | Checkpoint |
| --- | --- | --- | --- | --- |
| Char | Dev | 85.75 | 85.85 | 69 |
| | Test | 81.20 | 81.30 | |
| BPE | Dev | *96.14* | *96.19* | 73 |
| | Test | **95.95** | **96.01** | |
| Word | Dev | 90.22 | 90.34 | 65 |
| | Test | 89.04 | 89.01 | |

Table 3: Results of the BPE, word-level, and character-level models. Bolded: best result on Test. Italicized: best result on Dev.

AfroLID in Comparison. Using our Dev and Test data, we compare our best AfroLID model (the BPE model) with the following LID tools: CLD2, CLD3, Franc, LangDetect, and Langid.py. Since these tools do not support all our AfroLID languages, we compare accuracy and $F_{1}$ -scores of our models only on languages supported by each of these tools. As Tables A.1 and 4 show, AfroLID outperforms other tools on 7 and 8 languages out of 16 languages on the Dev set and Test set, respectively. We also compare $F_{1}$ -scores of Franc on the 88 African languages Franc supports with the $F_{1}$ -scores of AfroLID on those languages. As shown in Tables 5 and 6, AfroLID outperforms Franc on 78 languages and has a similar $F_{1}$ -score on five languages on the Dev set. AfroLID also outperforms Franc on 76 languages, and has a similar $F_{1}$ -score on five languages on the Test set.
| Lang. | CLD2 | CLD3 | Langid.py | LangDetect | Franc | AfroLID |
| --- | --- | --- | --- | --- | --- | --- |
| afr | 94.00 | 91.00 | 69.00 | 88.23 | 81.00 | 97.00 |
| amh | - | 97.00 | 100.00 | - | 35.00 | 97.00 |
| hau | - | 83.00 | - | - | 77.00 | 88.00 |
| ibo | - | 96.00 | - | - | 88.00 | 97.00 |
| kin | 92.00 | - | 45.00 | - | 47.00 | 89.00 |
| lug | 84.00 | - | - | - | 64.00 | 87.00 |
| mlg | - | 100.00 | 98.00 | - | - | 100.00 |
| nya | - | 96.00 | - | - | 75.00 | 92.00 |
| sna | - | 100.00 | - | - | 91.00 | 97.00 |
| som | - | 92.00 | - | - | 89.00 | 95.00 |
| sot | - | 99.00 | - | - | 93.00 | 88.00 |
| swa | 99.00 | 91.00 | 90.00 | 100.00 | - | 92.00 |
| swc | 93.00 | 94.00 | 96.00 | 97.02 | - | 87.00 |
| swh | 89.00 | 92.00 | 88.23 | 87.19 | 70.00 | 77.00 |
| xho | - | 59.00 | 88.00 | - | 30.00 | 67.00 |
| yor | - | 25.00 | - | - | 66.00 | 98.00 |
| zul | - | 89.00 | 20.00 | - | 40.00 | 50.00 |
Table 4: A comparison of results on AfroLID with CLD2, CLD3, Langid.py, LangDetect, and Franc using $F_{1}$ -score on the Test set. - indicates that the tool does not support the language.

Effect of Non-Latin Script. We investigate the performance of AfroLID on languages that use one of the Arabic, Ethiopic, Vai, and Coptic scripts. Specifically, we investigate the performance of AfroLID on Amharic (amh), Basketo (bst), Maale (mdy), Sebat Bet Gurage (sgw), Tigrinya (tir), Xamtanga (xan), Fulfude Adamawa (fub), Fulfude Caka (fuv), Tarifit (rif), Vai (vai), and Coptic (cop). $^{10}$ Vai and Coptic, the two unique scripts in AfroLID, each have an $F_{1}$ -score of 100. This corroborates research findings that languages written in unique scripts within an LID tool can be identified with up to $100\%$ recall, $F_{1}$ -score, and/or accuracy even using a small training dataset (Jauhiainen et al., 2017a). We assume this to be the reason Langid.py outperforms AfroLID on Amharic, as seen in Table 4, since Amharic is the only language that employs an Ethiopic script in Langid.py. AfroLID, on the other hand, has 8 languages using the Ethiopic script. However, it is not clear why Basketo, which uses the Ethiopic script, has a 100 $F_{1}$ -score. We, how
| ISO-3 | AfroLID | Franc | ISO-3 | AfroLID | Franc | ISO-3 | AfroLID | Franc | ISO-3 | AfroLID | Franc | ISO-3 | AfroLID | Franc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| aar | 100.00 | 74.50 | fat | 94.11 | 88.23 | koo | 96.07 | 86.27 | nso | 84.31 | 70.58 | tir | 98.03 | 100.00 |
| ada | 98.03 | 96.07 | fon | 98.03 | 86.27 | kqn | 96.07 | 86.27 | nya | 96.07 | 82.35 | tiv | 100.00 | 98.03 |
| afr | 94.11 | 84.31 | fuf | 98.03 | 60.78 | kqs | 100.00 | 64.70 | nym | 100.00 | 52.94 | toi | 100.00 | 68.62 |
| amh | 98.03 | 25.49 | fuv | 90.19 | 35.29 | ktu | 96.07 | 17.64 | nyn | 92.15 | 84.31 | tsn | 70.58 | 54.90 |
| bam | 70.58 | 45.09 | gaa | 96.07 | 96.07 | lia | 98.03 | 98.03 | nzi | 98.03 | 98.03 | tso | 96.07 | 80.39 |
| bba | 98.03 | 88.23 | gaz | 96.07 | 90.19 | lin | 98.03 | 96.07 | pcm | 98.03 | 78.43 | twi | 90.19 | 84.31 |
| bci | 76.47 | 86.27 | gjn | 100.00 | 94.11 | lot | 100.00 | 94.11 | pov | 96.07 | 86.27 | umb | 90.19 | 70.58 |
| bem | 82.35 | 64.70 | gkp | 64.70 | 68.62 | loz | 96.07 | 94.11 | run | 84.31 | 58.82 | vai | 100.00 | 100.00 |
| bfa | 100.00 | 90.19 | hau | 94.11 | 82.35 | lua | 98.03 | 96.07 | sag | 94.11 | 17.64 | ven | 96.07 | 96.07 |
| bin | 94.11 | 98.03 | ibb | 98.03 | 86.27 | lue | 90.19 | 60.78 | shk | 100.00 | 96.07 | vmw | 88.23 | 80.39 |
| bum | 100.00 | 52.94 | ibo | 94.11 | 90.19 | lug | 86.27 | 52.94 | sna | 96.07 | 80.39 | wol | 68.62 | 23.52 |
| cjk | 98.03 | 52.94 | kbp | 98.03 | 94.11 | lun | 98.03 | 90.19 | som | 98.03 | 96.07 | xho | 82.35 | 64.70 |
| crs | 94.11 | 82.35 | kde | 96.07 | 78.43 | men | 98.03 | 92.15 | sot | 76.47 | 90.19 | xsm | 100.00 | 25.49 |
| dag | 96.07 | 96.07 | kdh | 100.00 | 92.15 | mfq | 96.07 | 01.96 | ssw | 90.19 | 84.31 | yor | 100.00 | 39.21 |
| dga | 100.00 | 88.23 | kea | 98.03 | 3.92 | mos | 94.11 | 84.31 | suk | 100.00 | 31.37 | zdj | 100.00 | 62.74 |
| dip | 98.03 | 84.31 | kin | 80.39 | 52.94 | nba | 100.00 | 56.86 | sus | 100.00 | 96.07 | zul | 58.82 | 37.25 |
| dyu | 98.03 | 01.96 | kmb | 100.00 | 80.39 | nbl | 80.39 | 64.70 | swh | 74.50 | 72.54 | | | |
| ewe | 94.11 | 96.07 | kng | 98.03 | 66.66 | ndo | 90.19 | 82.35 | tem | 96.07 | 84.31 | | | |

AfroLID Average F1-score: 93.21
Franc Average F1-score: 72.85
Table 5: $F_{1}$ -scores on our Dev dataset for languages in AfroLID and Franc for 88 languages.
| ISO-3 | AfroLID | Franc | ISO-3 | AfroLID | Franc | ISO-3 | AfroLID | Franc | ISO-3 | AfroLID | Franc | ISO-3 | AfroLID | Franc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| aar | 96.00 | 74.00 | fat | 98.00 | 94.00 | koo | 96.00 | 96.00 | nso | 83.00 | 59.00 | tir | 99.00 | 97.00 |
| ada | 100.00 | 98.00 | fon | 97.00 | 92.00 | kqn | 98.00 | 84.00 | nya | 92.00 | 75.00 | tiv | 100.00 | 99.00 |
| afr | 97.00 | 81.00 | fuf | 93.00 | 52.00 | kqs | 95.00 | 73.00 | nym | 99.00 | 54.00 | toi | 98.00 | 80.00 |
| amh | 97.00 | 36.00 | fuv | 94.00 | 61.00 | ktu | 93.00 | 19.00 | nyn | 92.00 | 92.00 | tsn | 76.00 | 33.00 |
| bam | 70.00 | 30.00 | gaa | 95.00 | 97.00 | lia | 97.00 | 100.00 | nzi | 97.00 | 98.00 | tso | 99.00 | 94.00 |
| bba | 100.00 | 83.00 | gaz | 94.00 | 96.00 | lin | 99.00 | 98.00 | pcm | 96.00 | 82.00 | twi | 100.00 | 87.00 |
| bci | 98.00 | 92.00 | gjn | 98.00 | 99.00 | lot | 99.00 | 93.00 | pov | 93.00 | 82.00 | umb | 99.00 | 76.00 |
| bem | 94.00 | 90.00 | gkp | 63.00 | 69.00 | loz | 95.00 | 92.00 | run | 91.00 | 68.00 | vai | 100.00 | 100.00 |
| bfa | 99.00 | 91.00 | hau | 88.00 | 77.00 | lua | 99.00 | 87.00 | sag | 100.00 | 30.00 | ven | 95.00 | 85.00 |
| bin | 99.00 | 97.00 | ibb | 98.00 | 84.00 | lue | 95.00 | 68.00 | shk | 100.00 | 93.00 | vmw | 97.00 | 95.00 |
| bum | 97.00 | 72.00 | ibo | 97.00 | 88.00 | lug | 87.00 | 64.00 | sna | 97.00 | 91.00 | wol | 81.00 | 21.00 |
| cjk | 96.00 | 56.00 | kbp | 100.00 | 98.00 | lun | 97.00 | 86.00 | som | 95.00 | 89.00 | xho | 67.00 | 30.00 |
| crs | 96.00 | 83.00 | kde | 95.00 | 60.00 | men | 98.00 | 99.00 | sot | 88.00 | 93.00 | xsm | 99.00 | 53.00 |
| dag | 100.00 | 100.00 | kdh | 99.00 | 95.00 | mfq | 95.00 | 88.00 | ssw | 86.00 | 68.00 | yor | 98.00 | 66.00 |
| dga | 100.00 | 78.00 | kea | 96.07 | 0.00 | mos | 97.00 | 90.00 | suk | 99.00 | 34.00 | zdj | 96.00 | 63.00 |
| dip | 93.00 | 86.00 | kin | 89.00 | 47.00 | nba | 99.00 | 61.00 | sus | 99.00 | 96.00 | zul | 50.00 | 40.00 |
| dyu | 96.00 | 00.00 | kmb | 94.00 | 71.00 | nbl | 74.00 | 47.00 | swh | 77.00 | 70.00 | | | |
| ewe | 97.00 | 97.00 | kng | 98.00 | 58.00 | ndo | 96.00 | 76.00 | tem | 99.00 | 88.00 | | | |

AfroLID Average F1-score: 91.63
Table 6: $F_{1}$ -scores on our Test dataset for languages in AfroLID and Franc for 88 languages.

ever, found errors in Amharic, Sebat Bet Gurage, and Xamtanga (which use the Ethiopic script) as well as Fulfude Adamawa and Fulfude Caka (which use the Arabic script). We find that languages using the Ethiopic script are often confused with other languages using the Ethiopic script (except for $2\%$ of the time when Amharic is labelled as Wolof). We categorize this example under "others" in Figures 5 and B.1. On the other hand, the Fulfude languages are wrongly labelled as other dialects of Fulfude that use Latin scripts. We visualize further details of these errors in Figure B.1 (in the Appendix) and Figure 5 for our Dev and Test sets.

![](images/2d4241a56b1a7da68b761bb414a22b08543f7141d50cf3d60631c6e135d5e82e.jpg)
Figure 5: Errors on the different scripts in AfroLID Test set. We use ISO-3 codes to represent the languages. "Others" refers to languages AfroLID identifies as outside the list of languages selected for analysis.

Creole Languages. We investigate the performance of AfroLID on Creole languages. Creole languages are vernacular languages that emerged as a result of trade interactions between speakers of mutually unintelligible languages (Lent et al., 2022). A Creole language therefore shares lexical items and grammatical structures with one or more different, unrelated languages. As a result, Creole languages appear to be code-mixed. AfroLID is trained on nine Creole languages: Krio, Nigerian Pidgin, Cameroonian Pidgin, Seychelles Creole, Mauritian Creole, Kituba, Sango, Kabuverdianu, and Guinea-Bissau Creole. Krio, Cameroonian Pidgin, and Nigerian Pidgin are English based. Seychelles Creole and Mauritian Creole are French based. Kituba is Kongo based and Sango is Ngbadi based. Kabuverdianu and Guinea-Bissau Creole are Portuguese based.
Evaluating AfroLID on Creoles thus demonstrates the robustness of our model, since (as mentioned above) Creoles can be viewed as a type of code-mixed language. We show the performance of AfroLID on the nine Creole languages in Figures B.2 (in the Appendix) and 6 for the Dev and Test sets, respectively.

![](images/7571706f7f697f4918d426bbff79b457a18e1237f74cf04ec898ecdbdfba443e.jpg)
Figure 6: Errors on the different Creoles in AfroLID. We use ISO-3 codes to represent the languages. "Others" refers to languages AfroLID identifies as outside the list of languages selected for analysis.

We find that Guinea-Bissau Creole (pov), which is Portuguese based, is wrongly labelled as Kabuverdianu (kea), another Portuguese-based Creole, $1\%$ of the time. Cameroonian Pidgin (wes) is also wrongly labelled as Nigerian Pidgin (pcm) $7\%$ of the time. Since both Cameroonian and Nigerian Pidgin are English based, we assume lexical and/or grammatical similarities are responsible for these errors. It is also interesting to find cases where the wrong labels are languages spoken in the same geographical regions as the Creoles. For example, Kituba is wrongly labelled as Yombe, and both languages are spoken in Congo. Mauritian Creole (mfe), which is French based, is also wrongly labelled as Seychelles Creole (crs, another French-based Creole) and as two Indigenous languages spoken in Francophone Africa, Ngiemboon and Masana. We now further investigate the role of geographical proximity in our results.

Effect of Geographic Proximity. We evaluate the performance of AfroLID on languages that
We select South Africa because most South Africans are multi-lingual, and it is not uncommon to find code-mixing using a combination of Indigenous languages within the same text (Finlayson and Slabbert, 1997; Mabule, 2015). Figures B.3 (in Appendix) and 7 show the types of errors AfroLID makes in identifying these languages on our Dev and Test datasets respectively. We find that about $\sim 70\%$ of the errors are with other South African languages. Another $16\%$ are with dialects from neighbouring countries including Tswana, a dialect of Tsonga, Ndebele (Zimbabwe) similar to Zulu, and Ronga, a dialect of Tsonga.[11] We now provide a number of case studies we carry out to further probe AfroLID performance. + +![](images/a9125079b591559cd35cad6f6c38b77d2358c835e8ea7282977fffd86e71ff5c.jpg) +Figure 7: Errors on Indigenous South African languages in AfroLID Test data. "Others" refers to languages AfroLID identifies as outside the list of languages selected for analysis. + +# 6 Diagnostic Case Studies + +Although AfroLID is not trained on Twitter data, we evaluate its performance on Twitter to investigate the robustness of our models in out of domain scenarios. Namely, we carry out two diagnostic case studies using Twitter data. In the first study, which we refer to as Twitter in the wild, we use unannotated Tweets crawled from the web. In the second, we use annotated tweets. We now turn to the details of these studies. + +
| Tool | Covered/All | Training Data | Methodology |
| --- | --- | --- | --- |
| Langid.py | 7/97 | GDoc, SDoc, News, ENC, IC | Naive Bayes, n-gram |
| Langdetect | 3/49 | Wikipedia | Naive Bayes, char n-gram |
| CLD2 | 4/80 | Unknown | Naive Bayes |
| CLD3 | 13/107 | Unknown | Neural network, char n-gram |
| Equilid | 1/70 | Several GDoc, SDoc, RDoc, News, ENC, IC, Twitter | Neural seq2seq |
| Fasttext | 5/176 | Wiki, Tatoeba, Settimes | Classifier + hierarchical softmax, n-grams |
| Franc | 88/403 | UDHR | N-grams |
| AfroLID | 517/517 | Several GDoc, SDoc, RDoc, News, ENC, IC | Transformer |
Table 7: AfroLID in comparison. Covered/All: # of African lgs covered out of all lgs covered, GDoc: Gov docs, SDoc: Software docs, RDoc: Religious docs, News: Newswire, ENC: online encyclopedia, IC: Internet crawl.

# 6.1 Case Study I: AfroLID in the Wild

In order to evaluate the utility of AfroLID in a real-world scenario, we collect 700M tweets from Africa. For this, we use the Twitter streaming API from 2021-2022 with four geographical bounding boxes (central, eastern, western, and southern Africa). We extract a random sample of 1M tweets from this larger Twitter dataset for our analysis. Twitter currently automatically labels a total of 65 languages. Only one of these languages, i.e., Amharic, is an African language among our 517 languages. In the 1M sample, 110 tweets were tagged as "Amharic" and 6,940 as "undefined" by Twitter. We run our model on the "undefined" data. In all, the 6,940 tweets were identified as belonging to 242 African languages by AfroLID. Since the tweets we used were unannotated, we are not able to determine the number of tweets wrongly classified by AfroLID for each language. For this reason, we only evaluate a subset of the predicted languages: we ask native speakers of three languages (Yorùbá, Hausa, and Nigerian Pidgin) to help identify each tweet that was classified by AfroLID as belonging to their language. We provide details of this annotation study and examples of annotated samples in Table D.1 (Appendix D). We find that AfroLID is able to correctly identify Yorùbá both with and without diacritics, as well as in code-mixed examples. A total of 16 tweets are classified as Yorùbá by AfroLID, of which 7 are correct (43.75%), 2 are mixed with English, and 7 are wrongly labelled. Of the wrongly labelled tweets, one is identified as Nigerian Pidgin, while the others are unknown languages. For Nigerian Pidgin, of the 28 tweets predicted, 2 are correct (12.50%), 1 is mixed with an unknown language, and the others are wrongly classified.
We find that in most cases, tweets classified as Nigerian Pidgin are code-mixed with English and another Indigenous language. This gives us an indication that AfroLID identifies Nigerian Pidgin as an English-based Creole. Finally, a total of 333 tweets are classified as Hausa. Of these, 105 examples are correct (37.50%), 18 are mixed, while the others are wrongly labelled.

# 6.2 Case Study II: AfroLID on AfriSenti

We also test the performance of AfroLID on the recently released AfriSenti Twitter dataset of African languages. AfriSenti (Muhammad et al., 2022; Yimam et al., 2020) contains $\sim 56,000$ tweets annotated for sentiment in Amharic, Hausa, Igbo, Nigerian Pidgin, Swahili, and Yorùbá. We run AfroLID and the Franc tool on AfriSenti. As Figure 8 shows, AfroLID outperforms Franc on all languages except Nigerian Pidgin. We assume this is because Franc supports English and may have learnt some lexical/grammatical information from English that aids the identification of Nigerian Pidgin (although AfroLID outperforms Franc on Nigerian Pidgin on our Dev and Test sets, as shown in Tables 5 and 6).

![](images/3aef5afc3485c8dca4a645d0c8569f15b0c0afa6a4e1231d2a282e7b0740f98f.jpg)
Figure 8: Performance of AfroLID and Franc on AfriSenti using $F_{1}$ -score.

# 7 Related Work

LID tools are often used to select data to pre-train language models (Buck et al., 2014a) and, more generally, to develop multilingual corpora (Buck et al., 2014b; Dunn, 2020; Scannell, 2007; Ortiz Suárez et al., 2019). For many languages, including African languages, LID tools are either not available or perform poorly (Kreutzer et al., 2021; Caswell et al., 2020). A few works, however, have already focused on African language identification. For example, Asubiaro et al. (2018) cover Yorùbá, Hausa, and Igbo. Similarly, Duvenhage et al. (2017b); Dube and Suleman (2019) treat 10 Indigenous South African official languages.
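Several of the tools compared in this work rely on Naive Bayes classifiers over character n-grams (see the methodology column of Table 7). As a toy sketch of that family of methods, distinct from AfroLID's Transformer, here is a minimal multinomial Naive Bayes over character trigrams; the class name is hypothetical and the training snippets are illustrative placeholders, not real corpus data:

```python
import math
from collections import Counter, defaultdict

def ngrams(text, n=3):
    # Pad with spaces so word boundaries contribute their own n-grams.
    padded = f" {text} "
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

class CharNgramNB:
    """Toy multinomial Naive Bayes over character trigrams with add-one smoothing."""

    def fit(self, pairs):
        self.counts = defaultdict(Counter)
        for text, lang in pairs:
            self.counts[lang].update(ngrams(text))
        self.vocab = {g for c in self.counts.values() for g in c}
        return self

    def predict(self, text):
        def log_likelihood(lang):
            c = self.counts[lang]
            total, v = sum(c.values()), len(self.vocab)
            return sum(math.log((c[g] + 1) / (total + v)) for g in ngrams(text))
        return max(self.counts, key=log_likelihood)

# Illustrative-only training snippets (placeholder greetings, not a real corpus).
model = CharNgramNB().fit([
    ("bawo ni o se wa", "yor"),
    ("mo fe jeun", "yor"),
    ("ina kwana", "hau"),
    ("sannu da zuwa", "hau"),
])
print(model.predict("sannu"))  # "hau": its trigrams appear only in the hau snippets
```

Such models are cheap and effective when scripts or character statistics differ, but they struggle on closely related varieties and code-mixed text, which is part of the motivation for a neural approach.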
In addition, a handful of other African languages are covered in LID tools such as CLD2 (McCandless, 2010), CLD3 (Salcianu et al., 2018), Equilid (Jurgens et al., 2017), FastText, Franc, LangDetect (Shuyo, 2010) and Langid.py (Lui and Baldwin, 2012) and works such as Abdul-Mageed et al. (2020, 2021) and Nagoudi et al. (2022). We provide an extended literature review of language identification, related tools, as well as data and methods employed in Appendix C. We also provide a comparison between available LID tools in terms of training data, methodology, and number of covered African languages in Table 7. To the best of our knowledge, AfroLID is the first publicly available LID tool covering a large number of African languages and varieties $(n=517)$ . + +# 8 Conclusion + +We introduced our novel African language identification tool, AfroLID. To the best of our knowledge, AfroLID is the first publicly available tool that covers a large number of African languages and language varieties. AfroLID also has the advantages of wide geographical coverage (50 African countries) and linguistic diversity. We demonstrated the utility of AfroLID on non-Latin scripts, Creoles, and languages with close geographical proximity. We also empirically showed AfroLID's superiority to five available tools, including in performance in the wild as applied to the much-needed Twitter domain. In the future, we plan to extend AfroLID to cover the top 100 most popular languages of the world as well as code-switched texts. + +# 9 Limitations + +We can identify a number of limitations for our work, as follows: + +- AfroLID does not cover high-resource, popular languages that are in wide use by large populations. This makes it insufficient as a stand-alone tool in real-world scenarios where many languages are used side-by-side. Extending AfroLID to more languages, however, should be straightforward since training data is available. 
Indeed, it is our plan to develop AfroLID in this direction in the future.

- AfroLID recognizes only Indigenous African languages in monolingual settings. This limits our tool's utility in code-mixed scenarios (although Creoles are like code-mixed languages). This is undesirable especially because many African languages are commonly code-mixed with foreign languages due to historical reasons (Adebara and Abdul-Mageed, 2022). Again, to improve accuracy in the future, it would be beneficial to add support for foreign languages in code-mixed settings, such as English, French, and Portuguese.

- Although we strive to test AfroLID in real-world scenarios, we were not able to identify native speakers except for a small number of languages. In the future, we plan to work more with the community to enable wider analyses of our predictions.

# 10 Ethical Considerations

Although LID tools are useful for a wide range of applications, they can also be misused. We release AfroLID hoping that it will be beneficial to wide audiences, such as native speakers in need of better services in health and education. Our tool is also developed using publicly available datasets that may carry biases. Although we strive to perform analyses and diagnostic case studies to probe the performance of our models, our investigations are by no means comprehensive, nor do they guarantee absence of bias in the data. In particular, we do not have access to native speakers of most of the languages covered in AfroLID. This hinders our ability to investigate samples from each (or at least the majority) of the languages. We hope that future users of the tool will be able to make further investigations to uncover AfroLID's utility in wide real-world situations.
+ +# Acknowledgements + +We gratefully acknowledge support from Canada Research Chairs (CRC), the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN-2018-04267), the Social Sciences and Humanities Research Council of Canada (SSHRC; 435-2018-0576; 895-2020-1004; 895-2021-1008), Canadian Foundation for Innovation (CFI; 37771), Digital Research Alliance of Canada, UBC ARC-Sockeye, Advanced Micro Devices, Inc. (AMD), and Google. Any opinions, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of CRC, NSERC, SSHRC, CFI, CC, AMD, Google, or UBC ARC-Sockeye. + +# References + +Muhammad Abdul-Mageed, AbdelRahim Elmadany, and El Moatez Billah Nagoudi. 2021. ARBERT & MARBERT: Deep bidirectional transformers for Arabic. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7088-7105, Online. Association for Computational Linguistics. +Muhammad Abdul-Mageed, Chiyu Zhang, AbdelRahim Elmadany, and Lyle Ungar. 2020. Toward microdialect identification in diaglossic and code-switched environments. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5855-5876, Online. Association for Computational Linguistics. +Ife Adebara and Muhammad Abdul-Mageed. 2022. Towards Afrocentric NLP for African languages: Where we are and where we can go. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3814-3841, Dublin, Ireland. Association for Computational Linguistics. +Wafia Adouane and Simon Dobnik. 2017. Identification of languages in Algerian Arabic multilingual documents. In Proceedings of the Third Arabic Natural Language Processing Workshop, pages 1-8, Valencia, Spain. Association for Computational Linguistics. 
Jesujoba Alabi, Kwabena Amponsah-Kaakyire, David Adelani, and Cristina España-Bonet. 2020. Massive vs. curated embeddings for low-resourced languages: the case of Yorùbá and Twi. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 2754-2762, Marseille, France. European Language Resources Association.

12 https://alliancecan.ca
13 https://arc.ubc.ca/ubc-arc-sockeye

Toluwase Asubiaro, Tunde Adegbola, Robert Mercer, and Isola Ajiferuke. 2018. A word-level language identification strategy for resource-scarce languages. Proceedings of the Association for Information Science and Technology, 55(1):19-28.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015.
Timothy Baldwin and Marco Lui. 2010. Language identification: The long and the short of the matter. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 229-237, Los Angeles, California. Association for Computational Linguistics.
Yves Bestgen. 2017. Improving the character ngram model for the DSL task with BM25 weighting and less frequently used feature sets. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 115-123, Valencia, Spain. Association for Computational Linguistics.
Su Lin Blodgett, Johnny Wei, and Brendan O'Connor. 2017. A dataset and classifier for recognizing social media English. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 56-61, Copenhagen, Denmark. Association for Computational Linguistics.
Ralf D. Brown. 2013. Selecting and weighting n-grams to identify 1100 languages. In Text, Speech, and Dialogue, pages 475-483, Berlin, Heidelberg. Springer Berlin Heidelberg.
Christian Buck, Kenneth Heafield, and Bas van Ooyen. 2014a. N-gram counts and language models from the Common Crawl. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3579-3584, Reykjavik, Iceland. European Language Resources Association (ELRA).
Christian Buck, Kenneth Heafield, and Bas van Ooyen. 2014b. N-gram counts and language models from the Common Crawl. In Proceedings of the Language Resources and Evaluation Conference, Reykjavik, Iceland.
Isaac Caswell, Theresa Breiner, Daan van Esch, and Ankur Bapna. 2020. Language ID in the wild: Unexpected challenges on the path to a thousand-language web text corpus. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6588-6608, Barcelona, Spain (Online). International Committee on Computational Linguistics.
William B. Cavnar and John M. Trenkle. 1994. N-gram-based text categorization. In Proceedings of SDAIR-94, 3rd Annual Symposium on Document Analysis and Information Retrieval, pages 161-175.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
Denis Creissels, Gerrit J Dimmendaal, Zygmunt Frajzyngier, and Christa König. 2008. Africa as a morphosyntactic area.
In A Linguistic Geography of Africa, pages 86-150.
N. Dongen. 2017. Analysis and prediction of Dutch-English code-switching in Dutch social media messages.
Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
Meluleki Dube and Hussein Suleman. 2019. Language identification for South African Bantu languages using rank order statistics. In Digital Libraries at the Crossroads of Digital Information for the Future: 21st International Conference on Asia-Pacific Digital Libraries, ICADL 2019, Kuala Lumpur, Malaysia, November 4-7, 2019, Proceedings, pages 283-289, Berlin, Heidelberg. Springer-Verlag.
Jonathan Dunn. 2020. Mapping languages: the corpus of global language use. *Language Resources and Evaluation*, 54(4).
Bernardt Duvenhage, Mfundo Ntini, and Phala Ramonyai. 2017a. Improved text language identification for the South African languages. In 2017 Pattern Recognition Association of South Africa and Robotics and Mechatronics (PRASA-RobMech), pages 214-218. IEEE.
Bernardt Duvenhage, Mfundo Ntini, and Phala Ramonyai. 2017b. Improved text language identification for the South African languages. In 2017 Pattern Recognition Association of South Africa and Robotics and Mechatronics (PRASA-RobMech), pages 214-218.
David M. Eberhard, Gary F. Simons, and Charles D. Fennig, editors. 2021. Ethnologue: Languages of the World, twenty-fourth edition. SIL International, Dallas, Texas.

Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, and Philipp Koehn. 2020. CCAligned: A massive collection of cross-lingual web-document pairs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 5960-5969, Online. Association for Computational Linguistics.
Miquel Esplà, Mikel Forcada, Gema Ramírez-Sánchez, and Hieu Hoang. 2019. ParaCrawl: Web-scale parallel corpora for the languages of the EU.
In Proceedings of Machine Translation Summit XVII: Translator, Project and User Tracks, pages 118-119, Dublin, Ireland. European Association for Machine Translation.
Rosalie Finlayson and Sarah Slabbert. 1997. "We just mix": code switching in a South African township. 1997(125):65-98.
Spandana Gella, Kalika Bali, and Monojit Choudhury. 2014. "ye word kis lang ka hai bhai?" testing the limits of word level language identification. In Proceedings of the 11th International Conference on Natural Language Processing, pages 368-377, Goa, India. NLP Association of India.
Spandana Gella, Jatin Sharma, and Kalika Bali. 2013. Query word labeling and back transliteration for Indian languages: Shared task system description. In Working Notes - Forum for Information Retrieval Evaluation (FIRE) 2013 Shared Task. Best Performing System at FIRE-2013.
Helena Gomez, Ilia Markov, Jorge Baptista, Grigori Sidorov, and David Pinto. 2017. Discriminating between similar languages using a combination of typed and untyped character n-grams and words. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 137-145, Valencia, Spain. Association for Computational Linguistics.
Lena Grothe, Ernesto William De Luca, and Andreas Nurnberger. 2008. A comparative study on language identification methods. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA).
Gualberto A. Guzman, Jacqueline Serigos, Barbara E. Bullock, and Almeida Jacqueline Toribio. 2016. Simple tools for exploring variation in code-switching for linguists. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 12-20, Austin, Texas. Association for Computational Linguistics.
Larry M. Hyman. 2003. African languages and phonological theory. *Glot International*, 7(6):153-163.
Tommi Jauhiainen, Heidi Jauhiainen, Niko Partanen, and Krister Lindén. 2020. Uralic language identification (ULI) 2020 shared task dataset and the Wanca 2017 corpora. In Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects, pages 173-185, Barcelona, Spain (Online). International Committee on Computational Linguistics (ICCL).
Tommi Jauhiainen, Krister Lindén, and Heidi Jauhiainen. 2017a. Evaluation of language identification methods using 285 languages. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 183-191, Gothenburg, Sweden. Association for Computational Linguistics.
Tommi Jauhiainen, Krister Lindén, and Heidi Jauhiainen. 2017b. Evaluating HeLI with non-linear mappings. Pages 102-108.
Tommi Jauhiainen, Marco Lui, Marcos Zampieri, Timothy Baldwin, and Krister Lindén. 2019. Automatic language identification in texts: A survey. Journal of Artificial Intelligence Research, 65(1):675-682.
Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, and Tomas Mikolov. 2016. FastText.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427-431, Valencia, Spain. Association for Computational Linguistics.
David Jurgens, Yulia Tsvetkov, and Dan Jurafsky. 2017. Incorporating dialectal variability for socially equitable language identification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 51-57.
Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoit Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, Andre Niyongabo Rubungo, Toan Q. Nguyen, Mathias Muller, Andre Muller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Balli, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, and Mofetoluwa Adeyemi. 2021. Quality at a glance: An audit of web-crawled multilingual datasets. arXiv preprint arXiv:2103.12028.
Chris van der Lee and Antal van den Bosch. 2017. Exploring lexical and syntactic features for language variety identification. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 190-199, Valencia, Spain. Association for Computational Linguistics.
Heather Lent, Emanuele Bugliarello, and Anders Søgaard. 2022. Ancestor-to-creole transfer is not a walk in the park. In Proceedings of the Third Workshop on Insights from Negative Results in NLP, pages 68-74, Dublin, Ireland. Association for Computational Linguistics.
Marco Lui and Timothy Baldwin. 2011. Cross-domain feature selection for language identification. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 553-561, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.
Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool.
In Proceedings of the ACL 2012 System Demonstrations, pages 25-30, Jeju Island, Korea. Association for Computational Linguistics.
DR Mabule. 2015. What is this? Is it code switching, code mixing or language alternating? Journal of Educational and Social Research, 5(1).
Shervin Malmasi, Marcos Zampieri, Nikola Ljubesic, Preslav Nakov, Ahmed Ali, and Jörg Tiedemann. 2016. Discriminating between similar languages and Arabic dialect identification: A report on the third DSL shared task. In Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3), pages 1-14, Osaka, Japan. The COLING 2016 Organizing Committee.
Matej Martinc, Iza Skrjanec, Katja Zupan, and Senja Pollak. 2017. PAN 2017: Author profiling - gender and language variety prediction. In CLEF.
Michael McCandless. 2010. Accuracy and performance of Google's compact language detector. Blog post.
Shamsuddeen Hassan Muhammad, David Ifeoluwa Adelani, Sebastian Ruder, Ibrahim Said Ahmad, Idris Abdulmumin, Bello Shehu Bello, Monojit Choudhury, Chris Chinenye Emezue, Saheed Salahudeen Abdullahi, Anuoluwapo Aremu, Alipio Jorge, and Pavel Brazdil. 2022. NaijaSenti: A Nigerian Twitter sentiment corpus for multilingual sentiment analysis.
El Moatez Billah Nagoudi, AbdelRahim Elmadany, and Muhammad Abdul-Mageed. 2022. AraT5: Text-to-text transformers for Arabic language generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 628-647, Dublin, Ireland. Association for Computational Linguistics.
Pedro Javier Ortiz Suárez, Laurent Romary, and Benoit Sagot. 2020. A monolingual approach to contextualized word embeddings for mid-resource languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1703-1714, Online. Association for Computational Linguistics.

Pedro Javier Ortiz Suárez, Benoit Sagot, and Laurent Romary. 2019.
Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures. In Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019, Cardiff, 22nd July 2019, pages 9-16, Mannheim. Leibniz-Institut für Deutsche Sprache.
Muntsa Padró and Lluís Padró. 2004. Comparing methods for language identification. Procesamiento del Lenguaje Natural, 33.
Iria del Río Gayo, Marcos Zampieri, and Shervin Malmasi. 2018. A Portuguese native language identification dataset. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 291-296, New Orleans, Louisiana. Association for Computational Linguistics.
Alex Salcianu, Andy Golding, Anton Bakalov, Chris Alberti, Daniel Andor, David Weiss, Emily Pitler, Greg Coppola, Jason Riesa, Kuzman Ganchev, et al. 2018. Compact language detector v3.
Younes Samih. 2017. Dialectal Arabic Processing Using Deep Learning. Ph.D. thesis.
Kevin P. Scannell. 2007. The Crúbadán project: Corpus building for under-resourced languages.
Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2021. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1351-1361, Online. Association for Computational Linguistics.
Nakatani Shuyo. 2010. Language detection library for Java.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, pages 3104-3112, Cambridge, MA, USA. MIT Press.
Liling Tan, Marcos Zampieri, Nikola Ljubesic, and Jörg Tiedemann. 2014. Merging comparable data sources for the discrimination of similar languages: The DSL corpus collection.
In Proceedings of the 7th Workshop on Building and Using Comparable Corpora (BUCC), pages 11-15, Reykjavik, Iceland.
S. Thara and Prabaharan Poornachandran. 2021. Transformer-based language identification for Malayalam-English code-mixed text. IEEE Access, 9:118837-118850.
Andros Tjandra, Diptanu Gon Choudhury, Frank Zhang, Kritika Singh, Alexis Conneau, Alexei Baevski, Assaf Sela, Yatharth Saraf, and Michael Auli. 2021. Improved language identification through cross-lingual self-supervised learning.

Erik Tromp. 2011. Multilingual sentiment analysis on social media.
John Vogel and David Tresner-Kirsch. 2012. Robust language identification in short, noisy texts: Improvements to LIGA. In Proceedings of the 3rd International Workshop on Mining Ubiquitous and Social Environments, pages 1-9.
John C. Wells. 2000. Orthographic diacritics and multilingual computing. Language Problems and Language Planning, 24:249-272.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. Association for Computational Linguistics.
Yonghong Yan and E. Barnard. 1995. An approach to automatic language identification based on language-dependent phone recognition. In 1995 International Conference on Acoustics, Speech, and Signal Processing, volume 5, pages 3511-3514.
Seid Muhie Yimam, Hizkiel Mitiku Alemayehu, Abinew Ayele, and Chris Biemann. 2020. Exploring Amharic sentiment analysis from social media texts: Building annotation tools and classification models. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1048-1060, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Marcos Zampieri, Liling Tan, Nikola Ljubesic, and Jörg Tiedemann. 2014. A report on the DSL shared task 2014. In Proceedings of the First Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects, pages 58-67, Dublin, Ireland. Association for Computational Linguistics and Dublin City University.
Marcos Zampieri, Liling Tan, Nikola Ljubesic, Jörg Tiedemann, and Preslav Nakov. 2015. Overview of the DSL shared task 2015. In Proceedings of the Joint Workshop on Language Technology for Closely Related Languages, Varieties and Dialects, pages 1-9, Hissar, Bulgaria. Association for Computational Linguistics.
Marc A. Zissman and Kay M. Berkling. 2001. Automatic language identification. Speech Communication, 35(1):115-124.

# Appendices

# A Results of AfroLID on Dev Set

We report results from comparing AfroLID with CLD2, CLD3, Langid.py, LangDetect, and Franc on our Dev set in Table A.1.
| Lang. | CLD2 | CLD3 | Langid.py | LangDetect | Franc | AfroLID |
| --- | --- | --- | --- | --- | --- | --- |
| afr | 94.11 | 88.23 | 70.58 | 92.15 | 84.31 | 94.11 |
| amh | - | 98.03 | 100.00 | - | 25.49 | 98.03 |
| hau | - | 86.27 | - | - | 82.35 | 94.11 |
| ibo | - | 92.15 | - | - | 90.19 | 94.11 |
| kin | 88.23 | - | 56.86 | - | 52.94 | 80.39 |
| lug | 74.50 | - | - | - | 52.94 | 86.27 |
| mlg | - | 98.03 | 92.15 | - | - | 96.07 |
| nya | - | 96.07 | - | - | 82.35 | 96.07 |
| sna | - | 86.27 | - | - | 80.39 | 96.07 |
| som | - | 96.07 | - | - | 96.07 | 98.03 |
| sot | - | 90.19 | - | - | 90.19 | 76.47 |
| swa | 92.15 | 90.19 | 86.27 | 96.07 | - | 92.15 |
| swc | 90.19 | 96.07 | 98.03 | 98.03 | - | 74.50 |
| swh | 88.23 | 96.07 | 90.19 | 90.19 | 72.54 | 74.50 |
| xho | - | 90.19 | 94.11 | - | 64.70 | 82.35 |
| yor | - | 50.82 | - | - | 39.21 | 100.00 |
| zul | - | 86.27 | - | - | 37.25 | 58.82 |
Table A.1: A comparison of AfroLID with CLD2, CLD3, Langid.py, LangDetect, and Franc using $F_{1}$-score on the Dev set. A dash ("-") indicates that the tool does not support the language.

# B Analysis of AfroLID

We perform the experiments on non-Latin scripts, Creoles, and languages in close geographical proximity on the Dev set, as in Section 5. We show the performance of AfroLID on non-Latin scripts in Table B.1, on Creole languages in Table B.2, and on languages in close geographical proximity in Table B.3, respectively.

![](images/af09300fadb653f32281b9be5e4ca4d8470d459ce6f424a9200e611cefdcaa86.jpg)
Figure B.1: Errors on the different scripts in the AfroLID Dev set. We use ISO-3 codes to represent the languages. "Others" refers to languages AfroLID identifies as outside the list of languages selected for analysis.

![](images/55eee955d91d2bc04cf46299d05895da52120d0db6120dfebcf8edd12dd7eb80.jpg)
Figure B.2: Errors on the different Creoles in AfroLID. We use ISO-3 codes to represent the languages. "Others" refers to languages AfroLID identifies as outside the list of languages selected for analysis.

![](images/854ae3c8cb40c2162b5c4e047e0e2602d3ea1f6f1330da8973397e03a32c3fd1.jpg)
Figure B.3: Errors on Indigenous South African languages in the AfroLID Dev data. "Others" refers to languages AfroLID identifies as outside the list of languages selected for analysis.

# C Extended Literature Review

# C.1 Datasets

Datasets for LID are often created using various genres of data for one or more languages. For multilingual LID, which is the focus of our work, documents are gathered from web pages containing multiple languages. Web pages of multilingual organizations are also often desirable because the same text is translated into various languages. Most datasets for multilingual LID cover European languages and many other high-resource languages, making the AfroLID dataset a significant contribution to AfricaNLP.
To the best of our knowledge, the AfroLID dataset is the first publicly available dataset for multilingual language identification for African languages. We provide details of some other publicly available corpora for LID.

DSL Corpus Collection (Tan et al., 2014; Malmasi et al., 2016; Zampieri et al., 2015, 2014) is a multilingual collection of short excerpts of journalistic texts. It has been used as the main dataset for the DSL shared tasks organized within the scope of the workshop on NLP for Similar Languages, Varieties and Dialects (VarDial). It covers 22 languages.

NLI-PT (del Río Gayo et al., 2018) is a dataset collected from three different learner corpora of Portuguese: COPLE2, the Leiria corpus, and PEAPL2. The three corpora contain written productions from learners of Portuguese with different proficiency levels and native languages. The dataset includes all the data in COPLE2 and sections of PEAPL2 and the Leiria corpus, with details of the dataset in Table C.1. The dataset includes texts corresponding to the following 15 languages: Arabic, Chinese, Dutch, English, French, German, Italian, Japanese, Korean, Polish, Romanian, Russian, Swedish, Spanish, and Tetum.

| | COPLE2 | LEIRIA | PEAPL2 | TOTAL |
| --- | --- | --- | --- | --- |
| Sents | 1,058 | 330 | 480 | 1,868 |
| Tokens | 201,921 | 57,358 | 121,138 | 380,417 |
| Types | 9,373 | 4,504 | 6,808 | 20,685 |
| TTR | 0.05 | 0.08 | 0.06 | 0.05 |

Table C.1: Distribution of the NLI-PT dataset: number of sentences, tokens, types, and type-token ratio (TTR) per source corpus.

Wanca 2017 Web Corpora (Jauhiainen et al., 2020) is made up of re-crawls performed by the SUKI project. The target of the re-crawl was to download and check the availability of the then-current version of the Wanca service, about 106,000 pages. This list of 106,000 HTTP addresses was the result of several earlier web crawls, in which the language had been identified in a total of 3,753,672,009 pages.

EUROGOV, TCL, and WIKIPEDIA (Baldwin and Lui, 2010) consist of documents with a single encoding across 10 European languages, shorter documents across different encodings for 60 languages, and Wikipedia web crawls for 67 languages, respectively. These collections cover different genres, with EUROGOV collected from government documents, TCL from online news sources, and WIKIPEDIA from Wikipedia dumps.
The UMass Global English on Twitter Dataset (Blodgett et al., 2017) contains 10,502 tweets, randomly sampled from all publicly available geotagged Twitter messages, annotated for being in English, non-English, or having code-switching, language ambiguity, or having been automatically generated. It includes messages sent from 130 different countries.

# C.2 Features

Different features can be used for training a LID system, including:

- Bytes and Encoding: Some encodings use a fixed number of bytes (e.g., ASCII), while others use variable-length encoding. Some languages also use specific encodings (e.g., GuoBiao 18030 or Big5 for Chinese), while the same encoding can be used for different languages (e.g., UTF-8).

- Characters: Non-alphabetic characters, alphabets, capitalization, and the number of characters in words and word combinations have been used as features. Non-alphabetic characters have been used to detect languages like Arabic, emojis, and other text that uses non-alphabetic characters (Samih, 2017; Bestgen, 2017; Dongen, 2017). Alphabets can also be used to exclude languages when a unique character is absent in the test document.

- Character combination: Co-occurrences of some characters can be used to detect some languages. Linguistically, some languages abhor certain combinations of characters that other languages allow. For example, some Niger-Congo languages abhor vowel hiatus, and every consonant must be followed by a vowel. This feature has been found useful for developing LID systems (van der Lee and van den Bosch, 2017; Dongen, 2017; Martinc et al., 2017).

- Morphemes, Syllables, and Chunks: Different morphological features have been used, including prefixes, suffixes, and character n-grams (Gomez et al., 2017). Syllables, chunks, and chunks of syllables/n-grams have also been used for LID.
This also has linguistic significance in that the prefixes, suffixes, and morphological information embedded in a language can provide information about its etymology.

- Words: The position of words (Adouane and Dobnik, 2017), the string edit distance and n-gram overlap between the word to be identified and words in dictionaries, dictionaries of unique words in a language, a basic dictionary of a language, most common words, and word clusters, among others, are some discriminating features used for LID.

- Combination of words: Features here include the length of words and the ratio to the total number of words of once-occurring words, twice-occurring words, short words, long words, function words, adjectives and adverbs, personal pronouns, and question words (van der Lee and van den Bosch, 2017). This feature is linguistically significant since the ratio of certain categories of words can be useful for identifying some languages.

- Syntax and Part-of-speech (POS) tags: Syntactic features can be used to identify languages. Identifying an adjective before a noun, for instance, may be a good indication for some languages, and even the tags available can be a useful feature. Syntactic parsers together with dictionaries and morpheme lexicons, as well as n-grams composed of POS tags and function words, have all been used as features for LID (Adouane and Dobnik, 2017).

- Languages identified for surrounding words in word-level LID: The language of surrounding words can also be a useful feature, since there may be a higher likelihood of some languages being used together. This is especially true in the case of code-switching, where some language pairs are more likely to co-occur than others (Dongen, 2017).

- Feature smoothing: Feature smoothing is required in order to handle cases where not all features in a test document have been attested in the training corpora.
Feature smoothing is used in low-resource scenarios and when some features in a test document are rare or unseen in the training data. Different types of feature smoothing are possible; one example is additive smoothing, where an extra number of occurrences is added to every possible feature in the language model (Jauhiainen et al., 2019).

# C.3 Methods

Algorithms for LID work by first extracting one or more features and then using a classification algorithm to determine the appropriate language for a text (Grothe et al., 2008; Jauhiainen et al., 2019).

Hidden Markov Models (HMM) Hidden Markov Models (HMMs) are commonly used in spoken language identification (Zissman and Berkling, 2001; Yan and Barnard, 1995) as well as for written language (Guzman et al., 2016). Language models are first trained on text corpora for each language that the system must know about, and stored for later comparison with unidentified text. In these models, the parameters of the HMM are the transition probabilities and the initial probabilities, calculated using the relative frequency of each transition or initial state in the training data. After training, the system calculates the sequence probability under each language model that has been trained (Padró and Padró, 2004).

N-Gram-Based Text Categorization This method, introduced by Cavnar and Trenkle (1994) (see also Grothe et al., 2008), is based on comparing unique n-gram frequency profiles. These frequencies are sorted in decreasing order over all unique n-grams. N-gram profiles are created for each language to be trained, with $n = 1$ to 5. To classify a piece of text, the n-gram frequency profile for that text is built and compared to the n-gram profiles calculated during the training phase. This is done by computing the distance between the n-gram profile of the text and that of each language model. The computation also penalizes the total score of a language for each missing n-gram.
The language with the lowest score is selected as the identified language (Jauhiainen et al., 2017a; Padró and Padró, 2004).

LIGA This uses a graph-based n-gram approach called LIGA, which was originally used for sentiment analysis (Tromp, 2011) and adopted for LID (Vogel and Tresner-Kirsch, 2012). The language models use the relative frequencies of character trigrams and of 4-grams. To identify the language of a text, the relative frequency of each trigram and 4-gram found in a language model is added to the score of that language. The language with the highest score is selected as the language of the text.

HELI Method The HeLI method (Jauhiainen et al., 2017b) uses character n-gram language models for each language. The n-gram orders are hyperparameters ranging from one to a specified maximum $N_{\mathrm{max}}$. When classifying the language of a text, the model selects the most applicable language model for that text, gradually backing off to lower-order n-grams whenever an n-gram of order $N_{\mathrm{max}}$ cannot be applied, until an applicable n-gram is found. The validation set is used during evaluation to determine the best values for $N_{\mathrm{max}}$, the maximum number of features to be included in the language models, and the penalty for languages without the selected feature. The penalty functions like a smoothing parameter by transferring some of the probability mass to unseen features in the language model (Jauhiainen et al., 2017a).

Whatlang program This uses language models built with n-grams of variable byte lengths between 3 and 12 (Brown, 2013). The K most frequent n-grams and their relative frequencies are then extracted and calculated for each language. Once the first model is generated, substrings of larger n-grams are filtered out if the larger n-gram has a frequency not less than $62\%$ of the frequency of the shorter n-grams.
The model weights are computed for each language such that shorter n-grams with the same relative frequency have lower weights than larger n-grams. This is because larger n-grams are more informative but less common.

# C.4 Language Identification Tools

Several tools have been developed for multilingual LID. We provide details of different tools with representation for African languages, including CLD2 (McCandless, 2010), CLD3 (Salcianu et al., 2018), EquiLID (Jurgens et al., 2017), fastText (Joulin et al., 2017), Franc, Langid.py (Lui and Baldwin, 2012), and LangDetect (Shuyo, 2010).

# C.4.1 CLD2

CLD2 (McCandless, 2010) covers 83 languages and is trained on web-page text, using one of three different token algorithms. CLD2 probabilistically detects over 86 languages, including Afrikaans and Swahili. Its input is Unicode UTF-8 text, either plain text or HTML/XML, and it requires that legacy encodings be converted to valid UTF-8. For mixed-language input, CLD2 returns the top three languages found and their approximate percentages of the total text bytes (e.g., $80\%$ English and $20\%$ French out of 1000 bytes of text means about 800 bytes of English and 200 bytes of French). Optionally, it also returns a vector of text spans with each language identified.

# C.4.2 CLD3

CLD3 (Salcianu et al., 2018), the latest updated version of CLD2 (2020), covers 106 languages including Afrikaans, Amharic, Hausa, Malagasy, Shona, Somali, Swahili, Xhosa, Yoruba, and Zulu. CLD3 uses a neural network model for language identification. It contains the inference code and a trained model.

# C.4.3 EquiLID

EquiLID (Jurgens et al., 2017) is a character-based DNN encoder-decoder model (Cho et al., 2014; Sutskever et al., 2014) with an attention mechanism (Bahdanau et al., 2015).
EquiLID is a general-purpose language identification library and command-line utility built to identify a broad coverage of languages, recognize language in social media with a particular emphasis on short text, recognize dialectal speech from a language's speakers, identify code-switched text in any language pairing at least at the phrase level, and provide whole-message and per-word predictions. EquiLID covers 70 languages including Amharic. + +# C.4.4 FastText + +FastText (Joulin et al., 2016) supports 176 languages, including 5 African languages. The model uses a classifier with hierarchical softmax over n-grams. + +# C.4.5 Franc + +Franc supports 403 languages, including 88 African languages. It is built using Universal Declaration of Human Rights (UDHR) documents translated into multiple languages. Details of the model architecture are not available, but there is indication that $n$ -grams are used in the model. + +# C.4.6 LangDetect + +LangDetect (Shuyo, 2010) covers 49 languages including Afrikaans and Swahili. LangDetect uses a huge dictionary of inflections and compound words over a Naive Bayes model with character n-grams. + +# C.4.7 Langid.py + +Langid.py (Lui and Baldwin, 2012) covers 97 languages including Afrikaans, Amharic, Malagasy, Kinyarwanda, Swahili, and Zulu. The model is a naive Bayes classifier with a multinomial event model over a mixture of byte n-grams. Langid.py was designed to be used off-the-shelf. It comes with an embedded model whose training data is drawn from 5 domains - government documents, software documentation, newswire, online encyclopedia, and an internet crawl - though no domain covers the full set of languages by itself, and some languages are present only in a single domain. Different aspects of langid.py are evaluated in different ways. For cross-lingual feature selection evaluation, each dataset is partitioned into two sets of equal size.
The first partition is used for training a classifier while the second is used for evaluation. Since each dataset covers a different set of languages, there may be languages in the evaluation dataset that are not present in the training dataset (Lui and Baldwin, 2011). The langid.py module itself, on the other hand, is evaluated on different datasets, and its accuracy is compared with that of CLD, TextCat, and LangDetect; the accuracy of Langid.py exceeded that of the other tools on two Twitter datasets (Lui and Baldwin, 2012). Langid.py can be used as a command-line tool, Python library, or web service. + +
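Most of the n-gram-based identifiers described above (LIGA, Whatlang, and the HeLI baseline) share the same core idea: score each candidate language by summing the relative frequencies, under that language's model, of the character n-grams found in the input, then pick the highest-scoring language. A minimal illustrative sketch of that shared core - not the implementation of any one tool, and with hypothetical function names and corpora - might look like:

```python
from collections import Counter

def char_ngrams(text, n):
    """All character n-grams of length n in the text."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def train_model(corpus, orders=(3, 4)):
    """Relative frequencies of character trigrams and 4-grams for one language."""
    counts = Counter(g for n in orders for g in char_ngrams(corpus, n))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def identify(text, models, orders=(3, 4)):
    """Sum, per language, the relative frequencies of the text's n-grams
    that appear in that language's model; the highest score wins."""
    scores = {}
    for lang, model in models.items():
        scores[lang] = sum(model.get(g, 0.0)
                           for n in orders for g in char_ngrams(text, n))
    return max(scores, key=scores.get)

models = {"eng": train_model("the cat and the dog sat on the mat"),
          "fra": train_model("le chat et le chien sont sur le tapis")}
print(identify("the dog and the cat", models))  # → eng
```

Real systems add smoothing, frequency cut-offs, byte-length variation (Whatlang), or back-off between n-gram orders (HeLI) on top of this core.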
| LID Tool | African Languages |
| --- | --- |
| CLD2 | afr, lug, kin, swa |
| CLD3 | afr, amh, hau, ibo, mlg, nya, sna, som, sot, swa, xho, yor, zul |
| Langid.py | afr, amh, kin, mlg, swa, xho, zul |
| EquiLID | amh |
| LangDetect | afr, swh |
| FastText | afr, amh, mlg, som, swh, yor |
+ +Table C.2: African languages represented in different LID tools. + +Other LID tools without representation of African languages include LDIG and the Microsoft LID-tool (Gella et al., 2013, 2014), a word-level language identification tool for identifying code-mixed text of languages (like Hindi) written in Roman script and mixed with English. + +# D Twitter Analysis + +For the Twitter in-the-wild analysis, we ask for annotations of yes, no, or mixed on each tweet, where yes indicates agreement with the predicted label, no indicates disagreement, and mixed indicates that the tweet contains one or more languages other than the predicted one. We also ask for further annotations if the tweet is not in the predicted language, or is mixed with other language(s). In these cases, respondents are asked to identify the correct language (or mixed languages) if they know the language(s). We provide example annotations from the in-the-wild analysis in Table D.1. + +# E Languages Covered in AfroLID + +AfroLID supports 517 African languages and language varieties. We show a large map indicating the countries and languages represented in Figure E.1. Figures E.2 and E.3 show the number of languages covered in each country and the language family information for the languages. We also show the languages and language codes in Tables E.1, E.2, and E.3.
|  |  |  |  |  |  |  |  |  |  |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| aar | bez | cou | eza | ife | khy | lem | mfi | nga | rif | ssc | uth |
| abn | bfa | csk | fia | igb | kia | lik | mgc | ngb | rim | suk | vag |
| ada | bfd | daa | fip | ige | kik | lip | mgo | ngn | rub | sus | vif |
| adj | bfo | daf | flr | igl | kkj | lmd | mgq | nhr | run | taq | vun |
| afr | bib | dga | fon | ijn | klu | lmp | mkl | nhu | rwk | tcd | vut |
| agq | biv | dgi | gaa | ikk | kmb | lnl | mlr | nim | sag | tem | wbi |
| akp | bjv | dhm | gbo | ikw | knf | log | mnf | nin | sba | tex | wib |
| ann | bky | dib | gid | iqw | koq | lol | mnk | niq | sbd | tgw | wmw |
| anu | bmo | did | giz | iri | kqp | lom | mos | niy | sbp | thk | xed |
| anv | bmv | dik | gkp | iso | kqs | loq | moz | nko | sef | thv | xpe |
| asg | bom | dip | gna | izr | krs | lot | mpg | nla | ses | tiv | xrb |
| atg | bov | dnj | gnd | izz | krw | loz | mqb | nnh | sev | tlj | xsm |
| avn | box | dow | gng | jgo | krx | lro | mua | nnw | sfw | tod | xtc |
| avu | bqc | dsh | gol | jib | ksb | luc | muh | nse | shi | tog | xuo |
| azo | bqj | dug | gqr | kam | ksf | lwo | muy | nso | shj | tsw | yam |
| bav | bsc | dyi | gso | kbn | ksp | maf | mwm | nus | shk | ttq | yao |
| bba | bss | ebr | gur | kbo | kss | mbu | mws | nyb | sig | tr | yat |
| bbj | bud | ebu | guw | kbp | kub | mcp | myb | nyy | sil | tui | yba |
| bbk | bum | efi | gux | kcg | kuj | mcu | myk | nza | snf | tul | yor |
| bci | bus | ego | gvl | kde | kyq | mda | mzm | odu | snw | tum | zga |
| bcp | buy | eka | gya | kde | kzr | mdm | mzw | okr | sop | tvu | zne |
| bcy | bza | etu | hna | kdh | lam | meq | naq | oku | sor | udu |  |
| bdh | bzw | etx | ibb | kdl | lap | mer | ncu | ozm | sot | umb |  |
| bds | cko | ewe | ibo | ken | lee | mev | ndv | pkb | soy | urh |  |
| bex | cme | ewo | idu | ker | lef | mfh | ndz | pko | spp | uth |  |
+ +Table C.3: Language varieties that use diacritics in our training data. + +
| ISO-3 | Tweet | Representative? | If No/Mixed, language(s) |
| --- | --- | --- | --- |
| yor | Don't be on my TL supporting a rapist, a o ní s'oriburubuku o | Mixed | English |
|  | USER Omo ilorin Nile Adeleke ti Binu | Yes |  |
|  | Oproblema opo openi ne | No | Unknown |
|  | USER On top Iron Konji na Bastard | No | Nigerian Pidgin |
| ibo | USER Mana ima na ife any i na-ekwu bu eziokwu | Yes |  |
|  | USER Mo je ri e | No | Yorùbá |
|  | USER Hamna namna mzee | No | Unknown |
| hau | USER Kaji dadinka brother ka huta | Mixed | English |
|  | USER Su Umar danbarade | Yes |  |
|  | USER Good nkosazana Cathy | No | English + unknown |
|  | ovo ra mbuti USER Sesi Gladys mani | No | Unknown |
| pcm | USER Gompiano o bone dust ! | Mixed | Unknown |
|  | USER Wey I travel from Ilesa to Ipetumodu | Yes |  |
|  | USER Ende zwotoralo ngoho ngoho | No | Unknown |
|  | Despacito! beyaudkrnkwudh despacito, daueiejrb despacitoo! goose bumps | No | English + unknown |
+ +Table D.1: Some example annotations for the Twitter in the wild analysis. We show for each language the 4 possible annotations. + +![](images/151879a41184c20df53ceb904eee36c73fe5fe216e5f29be911be072c3fd957d.jpg) +Figure E.1: All 50 African countries in our data, with our 517 languages/language varieties in colored circles overlayed within respective countries. + +![](images/3b162b3b9c69d679b592b81ccc1c21b79d3b30c76c24648bd919b5889ba2bdc3.jpg) +Figure E.2: AfroLID's Covered languages. + +![](images/324f305b52cd4433567ae7b51726f8364734332bf18f0616a7928f77fdd729da.jpg) +Figure E.3: Percentage of languages per family on training dataset. + +
| ISO-3 | Language | ISO-3 | Language | ISO-3 | Language | ISO-3 | Language |
| --- | --- | --- | --- | --- | --- | --- | --- |
| aar | Afar / Qafar | bky | Bokyi | dow | Doyayo | gol | Gola |
| aba | Abe / Abbey | bmo | Bambalang | dsh | Daasanach | gqr | Gor |
| abn | Abua | bmv | Bum | dua | Douala | gso | Gbaya, Southwest |
| acd | Gikyode | bom | Berom | dug | Chiduruma | gud | Dida, Yocoboue |
| ach | Acholi | bov | Tuwuli | dwr | Dawro | gur | Farefare |
| ada | Dangme | box | Bwamu / Buamu | dyi | Sénoufo, Djimini | guw | Gun |
| adh | Jopadhola / Adhola | bqc | Boko | dyu | Jula | gux | Gourmanchema |
| adj | Adjukru / Adioukrou | bqj | Bandial | ebr | Ebrie | guz | Ekegusii |
| afr | Afrikaans | bsc | Oniyan | ebu | Kiembu / Embu | gvl | Gulay |
| agq | Aghem | bsp | Baga Sitemu | efi | Efik | gwr | Gwere |
| aha | Ahanta | bss | Akoose | ego | Eggon | gya | Gbaya, Northwest |
| ajg | Aja | bst | Basketo | eka | Ekajuk | hag | Hanga |
| akp | Siwu | bud | Ntcham | eko | Koti | har | Harari |
| alz | Alur | bum | Bulu | eto | Eton | hau | Hausa |
| amh | Amharic | bun | Sherbro | etu | Ejagham | hay | Haya |
| ann | Obolo | bus | Bokobaru | etx | Iten / Eten | hbb | Nya huba |
| anu | Anyuak / Anuak | buy | Bullom So | ewe | Ewe | heh | Hehe |
| anv | Denya | bwr | Bura Pabir | ewo | Ewondo | her | Herero |
| asa | Asu | bwu | Buli | fak | Fang | hgm | Haillom |
| asg | Cishingini | bxk | Bukusu | fat | Fante | hna | Mina |
| atg | Ivbie North-Okpela-Arhe | byf | Bete | ffm | Fulfulde, Maasina | ibb | Ibibio |
| ati | Attie | byv | Medumba | fia | Nobiin | ibo | Igbo |
| avn | Avatime | bza | Bandi | fip | Fipa | idu | Idoma |
| avu | Avokaya | bzw | Basa | flr | Fuliiru | igb | Ebira |
| azo | Awing | cee | Chopi | fon | Fon | ige | Igede |
| bam | Bambara | chw | Chuabo | fub | Fulfulde, Adamawa | igl | Igala |
| bav | Vengo | cjk | Chokwe | fue | Fulfulde, Borgu | ijn | Kalabari |
| bba | Baatonum | cko | Anufo | fuf | Pular | ikk | Ika |
| bbj | Ghomala | cme | Cerma | fuh | Fulfulde, Western Niger | ikw | Ikwere |
| bbk | Babanki | cop | Coptic | ful | Fulah | iqw | Ikwo |
| bci | Baule | cou | Wamey | fuq | Fulfulde Central Eastern Niger | iri | Rigwe |
| bcn | Bali | crs | Seychelles Creole | fvu | Fulfude Nigeria | ish | Esan |
| bcw | Bana | csk | Jola Kasa | gaa | Ga | iso | Isoko |
| bcy | Bacama | cwe | Kwere | gax | Oromo, Borana-Arsi-Guji | iyx | Yaka |
| bdh | Baka | daa | Dangaleat | gaz | Oromo, West Central | izr | Izere |
| bds | Burunge | dag | Dagbani | gbo | Grebo, Northern | izz | Izii |
| bem | Bemba / Chibemba | dav | Dawida / Taita | gbr | Gbagyi | jgo | Ngomba |
| beq | Beembe | dga | Dagaare | gde | Gude | jib | Jibu |
| ber | Berber | dgd | Dagaari Dioula | gid | Gidar | jit | Jita |
| bex | Jur Modo | dgi | Dagara, Northern | giz | South Giziga | jmc | Machame |
| bez | Bena | dhm | Dhimba | gjn | Gonja | kab | Kabyle |
| bfa | Bari | dib | Dinka, South Central | gkn | Gokana | kam | Kikamba |
| bfd | Bafut | did | Didinga | gkp | Kpelle, Guinea | kbn | Kare |
| bfo | Birifor, Malba | dig | Chidigo | gmv | Gamo | kbo | Keliko |
| bib | Bisa | dik | Dinka, Southwestern | gna | Kaansa | kbp | Kabiye |
| bim | Bimoba | dip | Dinka, Northeastern | gnd | Zulgo-gemzek | kby | Kanuri, Manga |
| bin | Edo | diu | Gciriku | gng | Ngangam | kcg | Tyap |
| biv | Birifor, Southern | dks | Dinka, Southeastern | gof | Goofa | kck | Kalanga |
| bjv | Bedjond | dnj | Dan | gog | Gogo | kdc | Kutu |
+ +Table E.1: AfroLID covered Languages - Part I. + +
| ISO-3 | Language | ISO-3 | Language | ISO-3 | Language | ISO-3 | Language |
| --- | --- | --- | --- | --- | --- | --- | --- |
| kde | Makonde | laj | Lango | mfh | Matal | ngb | Ngbandi, Northern |
| kdh | Tem | lam | Lamba | mfi | Wandala | ngc | Ngombe |
| kdi | Kumam | lap | Laka | mfk | Mofu, North | ngl | Lomwe |
| kdj | Ng'akarimojong | lee | Lyélé | mfq | Moba | ngn | Bassa |
| kdl | Tsikimba | lef | Lelemi | mfz | Mabaan | ngo | Ngoni |
| kdn | Kunda | lem | Nomaande | mgc | Morokodo | ngp | Ngulu |
| kea | Kabuverdianu | lgg | Lugbara | mgh | Makhuwa-Meetto | nhr | Naro |
| ken | Kenyang | lgm | Lega-mwenga | mgo | Meta' | nhu | Noone |
| khy | Kele / Lokele | lia | Limba, West-Central | mgq | Malila | nih | Nyiha |
| kia | Kim | lik | Lika | mgr | Mambwe-Lungu | nim | Nilamba / kinilyamba |
| kik | Gikuyu / Kikuyu | lin | Lingala | mgw | Matumbi | nin | Ninzo |
| kin | Kinyarwanda | lip | Sekpele | mif | Mofu-Gudur | niy | Ngiti |
| kiz | Kisi | lmd | Lumun | mkl | Mokole | nka | Nkoya / ShiNkoya |
| kki | Kagulu | lmp | Limbum | mlg | Malagasy | nko | Nkonya |
| kkj | Kako | lnl | Banda, South Central | mlr | Vame | nla | Ngombale |
| kln | Kalenjin | log | Logo | mmy | Migaama | nnb | Nande / Ndandi |
| klu | Klao | lom | Loma | mnf | Mundani | nnh | Ngiemboon |
| kma | Konni | loq | Lobala | mnk | Mandinka | nnq | Ngindo |
| kmb | Kimbundu | lot | Latuka | moa | Mwan | nse | Chinsenga |
| kmy | Koma | loz | Silozi | mos | Moore | nnw | Nuni, Southern |
| knf | Mankanya | lro | Laro | moy | Shekkacho | nso | Sepedi |
| kng | Kongo | lsm | Saamya-Gwe / Saamia | moz | Mukulu | ntr | Delo |
| knk | Kuranko | lth | Thur / Acholi-Labwor | mpe | Majang | nuj | Nyole |
| kno | Kono | lto | Tsotso | mpg | Marba | nus | Nuer |
| koo | Konzo | lua | Tshiluba | mqb | Mbuko | nwb | Nyabwa |
| koq | Kota | luc | Aringa | msc | Maninka, Sankaran | nxd | Ngando |
| kqn | Kikaonde | lue | Luvale | mur | Murle | nya | Chichewa |
| kqp | Kimré | lug | Luganda | muy | Muyang | nyb | Nyangbo |
| kqs | Kisi | lun | Lunda | mwe | Mwera | nyd | Olunyole / Nyore |
| kqy | Koorete | luo | Dholuo / Luo | mwm | Sar | nyf | Giryama |
| kri | Krio | lwg | Wanga | mwn | Cinamwanga | nyk | Nyaneka |
| krs | Gbaya | lwo | Luwo | mws | Mwimbi-Muthambi | nym | Nyamwezi |
| krw | Krahn, Western | maf | Mafa | myb | Mbay | nyn | Nyankore / Nyankole |
| krx | Karon | mas | Maasai | myk | Sénoufo, Mamara | nyo | Nyoro |
| ksb | Shambala / Kishambala | maw | Mampruli | myx | Masaaba | nyu | Nyungwe |
| ksf | Bafia | mbu | Mbula-Bwazza | mzm | Mumuye | nyy | Nyakyusa-Ngonde / Kyangonde |
| ksp | Kabba | mck | Mbunda | mzw | Deg | nza | Mbembe, Tigon |
| ktj | Krumen, Plapo | mcn | Masana / Massana | naq | Khoekhoe | nzi | Nzema |
| ktu | Kikongo | mcp | Makaa | naw | Nawuri | odu | Odual |
| kua | Oshiwambo | mcu | Mambila, Cameroon | nba | Nyemba | ogo | Khana |
| kub | Kutep | mda | Mada | nbl | IsiNdebele | oke | Okpe |
| kuj | Kuria | mdm | Mayogo | ncu | Chunburung | okr | Kirike |
| kus | Kusaal | mdy | Maale | ndc | Ndau | oku | Oku |
| kvj | Psikye | men | Mende | nde | IsiNdebele | orm | Oromo |
| kwn | Kwangali | meq | Merey | ndh | Ndali | ozm | Koonzime |
| kyf | Kouya | mer | Kimiru | ndj | Ndamba | pcm | Nigerian Pidgin |
| kyq | Kenga | mev | Maan / Mann | ndo | Ndonga | pem | Kipende |
| kzr | Karang | mfe | Morisyen / Mauritian Creole | ndv | Ndut | pkb | Kipfokomo / Pokomo |
| lai | Lambya | mfg | Mogofin | ndz | Ndogo |  |  |
+ +Table E.2: AfroLID covered Languages - Part II + +
| ISO-3 | Language | ISO-3 | Language | ISO-3 | Language |
| --- | --- | --- | --- | --- | --- |
| pov | Guinea-Bissau Creole | tcd | Tafi | won | Wongo |
| poy | Pogolo / Shipogoro-Pogolo | ted | Krumen, Tepo | xan | Xamtanga |
| rag | Lulogooli | tem | Timne | xed | Hdi |
| rel | Rendille | teo | Teso | xho | Isixhosa |
| rif | Tarifit | tex | Tennet | xnz | Mattokki |
| rim | Nyaturu | tgw | Senoufo, Tagwana | xog | Soga |
| rnd | Uruund | thk | Tharaka | xon | Konkomba |
| rng | Ronga / ShiRonga | thv | Tamahaq, Tahaggart | xpe | Kpelle |
| rub | Gungu | tir | Tigrinya | xrb | Karaboro, Eastern |
| run | Rundi / Kirundi | tiv | Tiv | xsm | Kasem |
| rwk | Rwa | tke | Takwane | xtc | Katcha-Kadugli-Miri |
| sag | Sango | tlj | Talinga-Bwisi | xuo | Kuo |
| saq | Samburu | tll | Otetela | yal | Yalunka |
| sba | Ngambay | tog | Tonga | yam | Yamba |
| sbd | Samo, Southern | toh | Gitonga | yao | Yao / Chiyao |
| sbp | Sangu | toi | Chitonga | yat | Yambeta |
| sbs | Kuhane | tpm | Tampulma | yba | Yala |
| sby | Soli | tsc | Tshwa | ybb | Yemba |
| sef | Sénoufo, Cebaara | tsn | Setswana | yom | Ibinda |
| ses | Songhay, Koyraboro Senni | tso | Tsonga | yor | Yoruba |
| sev | Sénoufo, Nyarafolo | tsw | Tsishingini | yre | Yaoure |
| sfw | Sehwi | ttj | Toro / Rutoro | zaj | Zaramo |
| sgw | Sebat Bet Gurage | ttq | Tawallammat | zdj | Comorian, Ngazidja |
| shi | Tachelhit | ttr | Nyimatli | zga | Kinga |
| shj | Shatt | tui | Toupouri | ziw | Zigula |
| shk | Shilluk | tul | Kutule | zne | Zande / paZande |
| sid | Sidama | tum | Chitumbuka | zul | Isizulu |
| sig | Paasaal | tuv | Turkana |  |  |
| sil | Sisaala, Tumulung | tvu | Tunen |  |  |
| sna | Shona | twi | Twi |  |  |
| snf | Noon | umb | Umbundu |  |  |
| sng | Sanga / Kiluba | urh | Urhobo |  |  |
| snw | Selee | uth | ut-Hun |  |  |
| som | Somali | vag | Vagla |  |  |
| sop | Kisonge | vai | Vai |  |  |
| sor | Somrai | ven | Tshivenda |  |  |
| sot | Sesotho | vid | Chividunda |  |  |
| soy | Miyobe | vif | Vili |  |  |
| spp | Senoufo, Supyire | vmk | Makhuwa-Shirima |  |  |
| ssw | Siswati | vmw | Macua |  |  |
| suk | Sukuma | vun | Kivunjo |  |  |
| sus | Soso | vut | Vute |  |  |
| swa | Swahili | wal | Wolayta |  |  |
| swc | Swahili Congo | wbi | Vwanji |  |  |
| swh | Swahili | wec | Guere |  |  |
| swk | Sena, Malawi | wes | Pidgin, Cameroon |  |  |
| sxb | Suba | wib | Toussian, Southern |  |  |
| taq | Tamasheq | wmw | Mwani |  |  |
| tcc | Datooga | wol | Wolof |  |  |
+ +Table E.3: AfroLID covered Languages - Part III. \ No newline at end of file diff --git a/afrolidaneurallanguageidentificationtoolforafricanlanguages/images.zip b/afrolidaneurallanguageidentificationtoolforafricanlanguages/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..79089a90a8d03b00a04b5a9bca9f4352bf648f16 --- /dev/null +++ b/afrolidaneurallanguageidentificationtoolforafricanlanguages/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49ce834cb61c5ba574ae6385230cb619d51d152ee9502e2be9171b92f88ea1e4 +size 1872403 diff --git a/afrolidaneurallanguageidentificationtoolforafricanlanguages/layout.json b/afrolidaneurallanguageidentificationtoolforafricanlanguages/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..bdd5adb62770ff4a869534c2c98873bf3d796225 --- /dev/null +++ b/afrolidaneurallanguageidentificationtoolforafricanlanguages/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c1e445f395fc506641f1b552a9b1b984d8c59e63d1407027391a9dc9c3f883df +size 574561 diff --git a/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/ade45168-067d-4a36-bc81-86718d012c4f_content_list.json b/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/ade45168-067d-4a36-bc81-86718d012c4f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f11de95dd00487fa106b5b23424c26863f9ca900 --- /dev/null +++ b/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/ade45168-067d-4a36-bc81-86718d012c4f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0eab5c58410046da802a3494485255dd03e1bf861ebf2da2ac4e684fd7ebffdc +size 93805 diff --git 
a/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/ade45168-067d-4a36-bc81-86718d012c4f_model.json b/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/ade45168-067d-4a36-bc81-86718d012c4f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..09cbb28454de41df7d38c14d94eb8eb2631fba73 --- /dev/null +++ b/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/ade45168-067d-4a36-bc81-86718d012c4f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c11ecdcc1b738b463905f7cba8781662fa7b6641c5612cb61010597f2f2a04a +size 117639 diff --git a/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/ade45168-067d-4a36-bc81-86718d012c4f_origin.pdf b/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/ade45168-067d-4a36-bc81-86718d012c4f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ec6ea3624aeadf08170a336c78cb5aa15e8a8cf6 --- /dev/null +++ b/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/ade45168-067d-4a36-bc81-86718d012c4f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:442c583e48f6715fd868c5d4bbd734d166b14f0bb60bc07b5e8e36625beac3d4 +size 479175 diff --git a/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/full.md b/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/full.md new file mode 100644 index 0000000000000000000000000000000000000000..ad3efe7d27b83a1bf8b5557ab4259c5164e62d14 --- /dev/null +++ b/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/full.md 
@@ -0,0 +1,368 @@ +# A Generative Model for End-to-End Argument Mining with Reconstructed Positional Encoding and Constrained Pointer Mechanism + +Jianzhu Bao $^{1,2*}$ , Yuhang He $^{1,2*}$ , Yang Sun $^{1,2}$ , Bin Liang $^{1,2\dagger}$ , Jiachen Du $^{1,2}$ , Bing Qin $^{1}$ , Min Yang $^{3}$ , Ruifeng Xu $^{1,2,4\dagger}$ + +1Harbin Institute of Technology, Shenzhen, China + +$^{2}$ Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies + +3Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences + +$^{4}$ Peng Cheng Laboratory, Shenzhen, China + +jianzhubao@gmail.com, yuhang.he_hitssz@outlook.com, sy95@mail.ustc.edu.cn + +bin.liang@stu.hit.edu.cn, jacobvan199165@gmail.com + +qinb@ir.hit.edu.cn, min.yang@siat.ac.cn, xuruifeng@hit.edu.cn + +# Abstract + +Argument mining (AM) is a challenging task as it requires recognizing the complex argumentation structures involving multiple subtasks. To handle all subtasks of AM in an end-to-end fashion, previous works generally transform AM into a dependency parsing task. However, such methods largely require complex pre- and post-processing to realize the task transformation. In this paper, we investigate the end-to-end AM task from a novel perspective by proposing a generative framework, in which the expected outputs of AM are framed as a simple target sequence. Then, we employ a pretrained sequence-to-sequence language model with a constrained pointer mechanism (CPM) to model the clues for all the subtasks of AM in the light of the target sequence. Furthermore, we devise a reconstructed positional encoding (RPE) to alleviate the order biases induced by the autoregressive generation paradigm. Experimental results show that our proposed framework achieves new state-of-the-art performance on two AM benchmarks. 
$^{1}$ + +# 1 Introduction + +As a fundamental task of computational argumentation, argument mining (AM) has drawn much research attention recently (Schaefer and Stede, 2021; Vecchi et al., 2021; Lawrence and Reed, 2019). The ultimate goal of AM is to analyze and understand argumentative text, so as to obtain structured argumentation knowledge that can support a diverse range of downstream tasks, such as argument persuasiveness prediction (Li et al., 2020; Huang et al., 2021), automated essay scoring (Ghosh et al., 2016; Nguyen and Litman, 2018; Song et al., 2020), argument generation (Hua et al., 2019; Slonim et al., + +![](images/40c0cada6e1799d7fb266b1fc7c0f2e39038361fad09fb4cd84be121bb89a48e.jpg) +Target Sequence: +[3,5,Claim,10,14,Premise,Support] +Figure 1: A simplified example of the AM task. Two argument components are marked in green and blue, respectively, where the former is a Claim and the latter is a Premise. In addition, there is a Support relation from the Premise to the Claim. Our proposed target sequence corresponding to this example is shown at the bottom. + +2021; Khatib et al., 2021), text summarization (Fabbri et al., 2021; Bar-Haim et al., 2020), etc. + +Given a piece of argumentative text as input, an end-to-end AM system needs to identify both the argument components (ACs) and the argumentative relations (ARs) between them. An example is shown in Figure 1. Specifically, AM generally comprises four fine-grained subtasks (Eger et al., 2017): 1) component segmentation detects the boundaries of fine-grained argumentative segments, which are known as ACs; 2) component classification classifies the ACs into the categories defined by argumentation schemes; 3) relation detection determines whether there is an AR between two ACs; 4) relation classification further classifies the types of the ARs.
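The expected outputs for the Figure 1 example can be written down concretely. The snippet below is an illustrative sketch: the tuple layout mirrors the bracketed target sequence shown in Figure 1, and `build_target_sequence` is a hypothetical helper, not the authors' code.

```python
# ACs as (start, end, label); the single AR links the Premise (source AC)
# to the Claim (target AC), as in Figure 1.
claim = (3, 5, "Claim")
premise = (10, 14, "Premise")
relations = [(claim, premise, "Support")]  # (target AC, source AC, AR label)

def build_target_sequence(relations):
    """Flatten each AR into a tuple [s_tc, e_tc, c_tc, s_sc, e_sc, c_sc, r]."""
    y = []
    for target_ac, source_ac, r in relations:
        y += [*target_ac, *source_ac, r]
    return y

print(build_target_sequence(relations))
# → [3, 5, 'Claim', 10, 14, 'Premise', 'Support']
```

This flattened form is exactly the target sequence the generative framework of this paper is trained to produce.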
Following Persing and Ng (2016) and Ye and Teufel (2021), we refer to the first two subtasks as argument component identification (ACI), and to the last two as argumentative relation identification (ARI). The end-to-end AM task is highly challenging as it is difficult to solve all the AM subtasks synchronously in a unified framework. + +Most previous work focuses on only a subset of the four fine-grained subtasks (Niculae et al., 2017; Reimers et al., 2019; Jo et al., 2019; Morio et al., 2020; Lenz et al., 2020; Ruiz-Dolz et al., 2021; Bao et al., 2021). However, only a limited number of studies are devoted to the end-to-end AM scenario (Persing and Ng, 2016; Eger et al., 2017; Ye and Teufel, 2021). Recent research efforts formulate end-to-end AM as a dependency parsing task and apply existing dependency parsers to solve it (Ye and Teufel, 2021; Dozat and Manning, 2018). Such methods, however, require not only a tedious pre-processing step to transform the argumentation structure into an elaborately designed dependency graph, but also a complex post-processing step to ensure that each dependency in the predicted output is completely consistent with the desired dependency graph of the AM task. Thus, developing an elegant, simple, and effective framework for the end-to-end AM task remains an important and challenging problem.
Further, we introduce a reconstructed positional encoding (RPE) scheme in the BART decoder to alleviate the order biases induced by the autoregressive generation paradigm. In addition, considering the long length of the ACs and the pattern of the target sequence, we present a constrained pointer mechanism (CPM), which is manifested as an auxiliary task at training time and as a constrained decoding method at inference time. This constrained pointer mechanism helps the model generate more accurate AC boundaries and fewer invalid target sequences. Compared to the previous dependency parsing-based method, it is more straightforward and easier to formalize the end-to-end AM task as a generation task. Also, in our proposed method, the predicted target sequence can be easily converted to the expected outputs of AM without complex post-processing. + +We conduct extensive experiments on two AM benchmarks of different structures to show the superiority of our method. Experimental results show that our proposed method achieves substantial improvements over several strong baselines, yielding state-of-the-art performance on both benchmark datasets. In addition, we carry out further analysis to show that the proposed RPE and CPM significantly reduce the errors in the generated target sequence, thus leading to performance improvements. + +# 2 Related Work + +# 2.1 Argument Mining + +AM traditionally involves four fine-grained subtasks. Early work typically studies a single subtask in isolation, such as component segmentation (Moens et al., 2007; Florou et al., 2013; Goudas et al., 2014), component classification (Palau and Moens, 2009; Stab and Gurevych, 2014; Lippi and Torroni, 2015; Nguyen and Litman, 2015), relation detection (Palau and Moens, 2009; Stab and Gurevych, 2014), and relation classification (Ghosh et al., 2014; Boltuzic and Snajder, 2014; Peldszus, 2014; Cocarascu and Toni, 2017).
+ +Recently, there has been a trend to study the joint modeling of multiple subtasks of AM. However, most work only addresses a subset of the four fine-grained subtasks, instead of performing an end-to-end approach. For jointly modeling component segmentation and component classification, Chernodub et al. (2019) built a neural sequence labeling model, while Wang et al. (2020) proposed a multi-scale model to recognize different types of ACs at corresponding levels. Since component segmentation is a token-level task while the other three subtasks are at the segment level (Ye and Teufel, 2021), it is difficult to model them jointly. Thus, some previous studies ignore the component segmentation task and only jointly model the other three subtasks. Kuribayashi et al. (2019) explored the application of span representations in AM. Morio et al. (2020) incorporated task-specific parameterization and bi-affine attention for improving non-tree AM. Many other works further ignore the relation classification task. Potash et al. (2017) employed a pointer network with the attention mechanism for structural prediction. Niculae et al. (2017) presented a factor graph model to impose structure constraints. Bao et al. (2021) proposed a transition-based neural network to construct argumentation graphs. + +Compared to the studies above, there have been
Although Ye and Teufel (2021) further extended the dependency parsing approach of Eger et al. (2017) and achieved promising performance, it requires tedious pre- and post-processing (e.g. label refinement, removing invalid or multiple edges). + +# 2.2 Generative Methods for IE + +With superior development of pre-training techniques, there has been a rising trend of adopting generative models to solve information extraction (IE) tasks, such as aspect-based sentiment analysis (Zhang et al., 2021b,a), named entity recognition (Ren et al., 2021; Cui et al., 2021; Zhang et al., 2022), event argument extraction (Li et al., 2021), etc. + +Closely related to our work, some recent studies incorporate the pre-trained generative models with the pointer mechanism to better address IE tasks. Yan et al. (2021a) formalized all the subtasks of aspect-based sentiment analysis into generation tasks, and employed the pre-trained BART (Lewis et al., 2020) with the pointer mechanism to address them in a unified framework. Similarly, Yan et al. (2021b) explored solving multiple NER subtasks with pre-trained BART. + +# 2.3 Pointer Mechanism + +Pointer mechanism (Vinyals et al., 2015) aims to solve the problem of generating an output sequence that contains elements from the input sequence, which is usually based on a sequence-to-sequence model with the attention mechanism (Bahdanau et al., 2015). It has been applied to various tasks including dependency parsing (Ma et al., 2018; Liu et al., 2019; Fernandez-Gonzalez and Gomez-Rodriguez, 2020), named entity recognition (Yan et al., 2021b; Fei et al., 2021; Yang and Tu, 2022), text summarization (Miao and Blunsom, 2016; See et al., 2017; Paulus et al., 2018), etc. + +In this paper, we modify the traditional pointer mechanism by imposing task-specific constraints to make it more suitable for the generative model for the end-to-end AM. 
+ +# 3 Task Formulation + +Formally, for the end-to-end AM, the input is a piece of argumentative text $X = [w_{1}, w_{2}, \ldots, w_{n_{x}}]$ with $n_{x}$ tokens. The first goal is to extract a set of ACs $A = \{a_{i} | a_{i} = (s_{i}, e_{i}, c_{i})\}_{i=1}^{|A|}$ , where $a_{i}$ is the $i$ -th AC, $s_{i}$ and $e_{i}$ respectively denote its start and end indexes, $c_{i}$ represents its category label, such as "Claim", "Premise", etc. The second goal is to output a set of ARs $R = \{(a_{i}^{sc}, a_{i}^{tc}, r_{i})\}_{i=1}^{|R|}$ , where $a_{i}^{sc} \in A$ and $a_{i}^{tc} \in A$ denote the source and target ACs, $r_{i}$ is the AR category label, such as "Support", "Attack", etc. Here, we denote the AC and AR category label lists as $L^{c} = [l_{1}^{c}, l_{2}^{c}, \ldots, l_{n_{c}}^{c}]$ and $L^{r} = [l_{1}^{r}, l_{2}^{r}, \ldots, l_{n_{r}}^{r}]$ , where $l_{i}^{c} / l_{i}^{r}$ is the $i$ -th AC/AR category label, $n_{c} / n_{r}$ is the number of all the possible AC/AR category labels. + +To solve the end-to-end AM through a generative framework, we need to formulate it as a sequence-to-sequence generation task with $X$ as the input source sequence. Also, the expected AM outputs $A$ and $R$ are transformed as the target sequence $Y = [T_{1}, T_{2}, \ldots, T_{|R|}]$ , where the tuple $T_{i} = [s_{i}^{tc}, e_{i}^{tc}, c_{i}^{tc}, s_{i}^{sc}, e_{i}^{sc}, c_{i}^{sc}, r_{i}]$ represents the $i$ -th AR in $R$ . For the $i$ -th AR, $s_{i}^{sc}/e_{i}^{sc}$ and $s_{i}^{tc}/e_{i}^{tc}$ respectively denote the start/end indexes of the source and the target ACs, $c_{i}^{sc}$ and $c_{i}^{tc}$ are their AC category labels. + +Example. In Figure 1, there is only one AR, so the target sequence is $Y = [T_1] = [3,5,Claim,10,14,Premise,Support]$ . Also, the AC and AR category label lists for this example are $L^{c} = [MajorClaim,Claim,Premise]$ and $L^{r} = [Support,Attack]$ , respectively. + +# 4 Method + +Inspired by Yan et al. 
(2021a,b), we utilize a BART-based generative framework as our basic model, which takes $X$ as input and generates the target sequence $Y$ with the vanilla pointer mechanism. Since ACs are much longer and have more ambiguous boundaries than named entities (Yan et al., 2021b) or aspect terms (Yan et al., 2021a), it is more challenging to solve the end-to-end AM task with a generative framework. Hence, we introduce a constrained pointer mechanism (CPM) to help the model generate more accurate AC boundaries and fewer invalid predictions. Further, we propose a reconstructed positional encoding (RPE) to alleviate the order biases introduced by the autoregressive paradigm in the basic model. + +# 4.1 Basic Model + +First, we feed $X$ into the BART encoder to derive the hidden representations of the source sequence: + +$$ \mathbf{H}^{e} = \mathrm{BART\_Encoder}(X) \tag{1} $$ + +where $\mathbf{H}^e\in \mathbb{R}^{n_x\times d}$ , and $d$ is the hidden dimension of BART. + +Then, the BART decoder incorporates $\mathbf{H}^e$ and the previous decoder outputs $Y_{< t}$ to predict the current output. The hidden state of the last decoder layer at time step $t$ is: + +$$ \mathbf{h}_{t}^{d} = \mathrm{BART\_Decoder}\left(\mathbf{H}^{e}, Y_{< t}\right) \tag{2} $$ + +where $\mathbf{h}_t^d\in \mathbb{R}^d$ . Note that, during this procedure, each start/end index (i.e. $s_i^{sc}$ , $e_i^{sc}$ , $s_i^{tc}$ , or $e_i^{tc}$ ) in $Y_{< t}$ needs to be mapped to its corresponding token in $X$ first. + +Vanilla Pointer Mechanism At time step $t$ , the vanilla pointer mechanism selects tokens from the input $X$ through a pointer distribution $\bar{\mathbf{P}}_t \in \mathbb{R}^{n_x}$ over all the positions of $X$ . In this way, the start/end indexes in the target sequence $Y$ can be generated. However, we also expect the model to generate AC and AR category labels.
Hence, we expand the pointer distribution $\bar{\mathbf{P}}_t$ into $\mathbf{P}_t \in \mathbb{R}^{n_x + n_c + n_r}$ by combining $\bar{\mathbf{P}}_t$ with the distributions over all the possible AC and AR category labels.

More precisely, by feeding $X$, $L^c$ and $L^r$ into the embedding layer of BART, we obtain the token embedding matrix $\mathbf{E} \in \mathbb{R}^{n_x \times d}$, the AC category embedding matrix $\mathbf{L}^c \in \mathbb{R}^{n_c \times d}$ and the AR category embedding matrix $\mathbf{L}^r \in \mathbb{R}^{n_r \times d}$.

Following Yan et al. (2021a), the encoder output matrix $\mathbf{H}^e$ is combined with $\mathbf{E}$ to produce the representation matrix for the pointer mechanism:

$$
\bar{\mathbf{H}}^{e} = \alpha\,\mathrm{MLP}_{m}(\mathbf{H}^{e}) + (1 - \alpha)\,\mathbf{E} \tag{3}
$$

where $\alpha$ is a hyper-parameter and $\mathrm{MLP}_m$ is a multi-layer perceptron. Subsequently, the expanded pointer distribution $\mathbf{P}_t$ at time step $t$ is derived by:

$$
\mathbf{H}^{p} = \left[\bar{\mathbf{H}}^{e}; \mathbf{L}^{c}; \mathbf{L}^{r}\right] \tag{4}
$$

$$
\mathbf{P}_{t} = \mathrm{Softmax}\left(\mathbf{H}^{p} \mathbf{h}_{t}^{d}\right) \tag{5}
$$

where $\mathbf{P}_t \in \mathbb{R}^{n_x + n_c + n_r}$, and $[\,\cdot\,;\,\cdot\,]$ denotes matrix concatenation along the first dimension. With $\mathbf{P}_t$, the probability $P(Y_{t} \mid Y_{< t}, X)$ of generating the $t$-th element of the target sequence $Y$ can be obtained.

Finally, the model is optimized with the negative log-likelihood loss:

$$
\mathcal{L}_{b} = - \sum_{t = 1}^{|Y|} \log P\left(Y_{t} \mid Y_{< t}, X\right) \tag{6}
$$

# 4.2 Constrained Pointer Mechanism

We refer to each position in $\mathbf{P}_t \in \mathbb{R}^{n_x + n_c + n_r}$ as a pointer index.
The pointer indexes in range $I^x = [1, n_x]$ are token indexes, while the pointer indexes in ranges $I^c = [n_x + 1, n_x + n_c]$ and $I^r = [n_x + n_c + 1, n_x + n_c + n_r]$ are AC category indexes and AR category indexes, respectively.

At each decoding time step, the vanilla pointer mechanism selects an index directly based on $\mathbf{P}_t$, which is problematic because the set of valid pointer indexes differs across time steps. To be specific, regarding the $i$-th tuple $T_i = [s_i^{tc}, e_i^{tc}, c_i^{tc}, s_i^{sc}, e_i^{sc}, c_i^{sc}, r_i]$ in the target sequence $Y$, when predicting the target AC's end index $e_i^{tc}$ from its pointer distribution, all the pointer indexes not greater than the decoded start index $s_i^{tc}$ are invalid, since the end index must be greater than the start index. The pointer indexes in ranges $I^c$ and $I^r$ are also invalid, since $e_i^{tc}$ must be a token index within range $I^x$.

To address this issue, we define the following three constraints:

(1) When decoding an end index, it must be greater than its corresponding start index.
(2) When decoding the start and end indexes of a source AC, they cannot overlap with the target AC.
(3) The valid pointer indexes must be consistent with the type of the expected output at the current time step. For example, when decoding the AR category label $r_i$, only the pointer indexes in range $I^r$ (i.e., AR category indexes) are valid.

To introduce these constraints into the basic model, we further define a proxy distribution $\mathbf{Q}_t \in \mathbb{R}^{n_x + n_c + n_r}$ for decoding time step $t$ to simulate the real pointer distribution under the constraints, where the values of the valid and invalid pointer indexes are set to 1 and 0, respectively. To illustrate, for the example shown in Figure 1, the proxy distribution of each element in the target sequence is shown in Table 1.
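To make the proxy distribution concrete, the following sketch builds $\mathbf{Q}_t$ for one element of a tuple under the three constraints above. This is an illustrative reimplementation, not the authors' code; the function name, argument layout, and 0-based slot indexing are our own.

```python
def proxy_distribution(t, n_x, n_c, n_r, prev, tc_span=None):
    """Binary proxy distribution Q_t for the t-th element of a tuple.

    t       -- position within the 7-element tuple (0-based):
               0: s_tc, 1: e_tc, 2: c_tc, 3: s_sc, 4: e_sc, 5: c_sc, 6: r
    prev    -- 1-based token indexes already decoded in this tuple;
               prev[-1] is the matching start index when decoding an end
    tc_span -- (s_tc, e_tc) of the decoded target AC, used when decoding
               the source AC to forbid overlap (constraint 2)
    """
    size = n_x + n_c + n_r
    q = [0] * size
    if t in (0, 3):                        # start index: any token index
        valid = set(range(1, n_x + 1))
    elif t in (1, 4):                      # end index: must exceed its start
        valid = set(range(prev[-1] + 1, n_x + 1))
    elif t in (2, 5):                      # AC category label (constraint 3)
        valid = set(range(n_x + 1, n_x + n_c + 1))
    else:                                  # t == 6: AR category label
        valid = set(range(n_x + n_c + 1, size + 1))
    if t in (3, 4) and tc_span is not None:  # constraint 2: no overlap
        valid -= set(range(tc_span[0], tc_span[1] + 1))
    for i in valid:
        q[i - 1] = 1                       # 1-based pointer index -> 0-based slot
    return q
```

With $n_x = 14$, $n_c = 3$, $n_r = 2$ as in Figure 1, decoding $e_i^{tc}$ after $s_i^{tc} = 3$ marks exactly token indexes 4-14 as valid, consistent with the $\mathbf{Q}_2$ row of Table 1.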
With the proxy distribution $\mathbf{Q}_t$, we can easily incorporate the aforementioned three constraints into both the training and inference stages. Note that, during training, $\mathbf{Q}_t$ is constructed from the ground-truth target sequence, while during inference it is constructed from the sequence generated in previous time steps.

Incorporating Constraints in Training. During training, we regard $\mathbf{Q}_t$ as the labels of an auxiliary binary classification task to steer the model in a multi-task learning manner. This auxiliary task provides the model with supervision signals on whether each pointer index is valid, making it less likely to predict invalid results.

Concretely, we first calculate the probabilities of the binary classification task:

$$
\bar{\mathbf{H}}^{p} = \mathrm{MLP}_{a}\left(\left[\mathbf{H}^{e}; \mathbf{L}^{c}; \mathbf{L}^{r}\right]\right) \tag{7}
$$

$$
\mathbf{P}_{t}^{a} = \mathrm{Sigmoid}\left(\bar{\mathbf{H}}^{p} \mathbf{h}_{t}^{d}\right) \tag{8}
$$

where $\mathbf{P}_t^a \in \mathbb{R}^{n_x + n_c + n_r}$. The training objective for this auxiliary task is:

$$
\mathcal{L}_{a} = - \sum_{t = 1}^{|Y|} \sum_{i = 1}^{n_x + n_c + n_r} \left[ \mathbf{Q}_{t,i} \log \mathbf{P}_{t,i}^{a} + \left(1 - \mathbf{Q}_{t,i}\right) \log\left(1 - \mathbf{P}_{t,i}^{a}\right) \right] \tag{9}
$$

where $\mathbf{P}_{t,i}^{a}$ and $\mathbf{Q}_{t,i}$ are the $i$-th elements of $\mathbf{P}_t^a$ and $\mathbf{Q}_t$, respectively.

During training, we combine $\mathcal{L}_a$ with the loss function of the basic model $\mathcal{L}_b$ as the joint training objective.

Incorporating Constraints in Inference. Although incorporating constraints in training via multi-task learning reduces invalid predictions, it cannot eliminate them completely.
Thus, during inference, we impose hard constraints to ensure that the prediction at each time step is valid.

More precisely, we directly regard $\mathbf{Q}_t$ as a binary mask and set the probabilities of all the invalid pointer indexes to zero via the Hadamard product:

$$
\hat{\mathbf{P}}_{t} = \mathbf{Q}_{t} \odot \mathbf{P}_{t} \tag{10}
$$

where $\hat{\mathbf{P}}_t \in \mathbb{R}^{n_x + n_c + n_r}$ is the constrained pointer distribution. Finally, we use this constrained pointer distribution instead of $\mathbf{P}_t$ to predict the target sequence.
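At inference time, Eq. (10) reduces to an elementwise mask applied before selecting the next output. A minimal sketch with NumPy follows (variable and function names are ours; the paper combines this masking with beam search rather than the greedy choice shown here):

```python
import numpy as np

def constrained_next_index(p_t, q_t):
    """Apply Eq. (10): zero out invalid pointer indexes via the
    Hadamard product Q_t ⊙ P_t, then pick the best valid index."""
    p_hat = np.asarray(p_t) * np.asarray(q_t)
    return int(np.argmax(p_hat))

# Toy pointer distribution over 5 pointer indexes: the raw argmax
# (index 0) is invalid at this time step, so the mask redirects the
# choice to the best valid index (index 3).
p = [0.40, 0.10, 0.20, 0.25, 0.05]
q = [0, 1, 1, 1, 1]
```

Here `constrained_next_index(p, q)` returns 3, whereas the unconstrained argmax over `p` would return the invalid index 0.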
| PD | TS | $I^x$ | $I^c$ | $I^r$ |
| --- | --- | --- | --- | --- |
| $\mathbf{Q}_1$ | 3 | [1,1,1,1,1,1,1,1,1,1,1,1,1,1] | [0,0,0] | [0,0] |
| $\mathbf{Q}_2$ | 5 | [0,0,0,1,1,1,1,1,1,1,1,1,1,1] | [0,0,0] | [0,0] |
| $\mathbf{Q}_3$ | C. | [0,0,0,0,0,0,0,0,0,0,0,0,0,0] | [1,1,1] | [0,0] |
| $\mathbf{Q}_4$ | 10 | [1,1,0,0,0,1,1,1,1,1,1,1,1,1] | [0,0,0] | [0,0] |
| $\mathbf{Q}_5$ | 14 | [0,0,0,0,0,0,0,0,0,0,1,1,1,1] | [0,0,0] | [0,0] |
| $\mathbf{Q}_6$ | P. | [0,0,0,0,0,0,0,0,0,0,0,0,0,0] | [1,1,1] | [0,0] |
| $\mathbf{Q}_7$ | S. | [0,0,0,0,0,0,0,0,0,0,0,0,0,0] | [0,0,0] | [1,1] |
Table 1: Proxy distributions (PD) for the example given in Figure 1, whose target sequence (TS) is [3, 5, Claim, 10, 14, Premise, Support]. $I^{x}$, $I^{c}$ and $I^{r}$ respectively denote the ranges of the token indexes, AC category indexes and AR category indexes. C., P. and S. are abbreviations for Claim, Premise and Support.

# 4.3 Reconstructed Positional Encoding

Similar to the findings of Zhang et al. (2022), we argue that the basic model described in Section 4.1 suffers from order biases due to its autoregressive generation paradigm. In particular, the order of the tuples in the target sequence $Y$ is fixed, but there are actually no order relations among these tuples. Therefore, when the basic model generates the target sequence, the tuples that have already been generated can have undesired effects on the tuple currently being generated. Intuitively, the positional encoding (PE) of BART's decoder is closely related to these order biases, since it represents the order information of the target sequence. Thus, to alleviate this issue, we propose to replace the original PE scheme in the BART decoder with a reconstructed positional encoding (RPE) scheme.

Original PE of BART's decoder. We denote the original position indexes for the target sequence $Y$ as $Y^{p} = [1, 2, \dots, |Y|]$, where each position index is transformed into a positional embedding vector by BART's embedding layer.

Reconstruction of Original PE. We substitute the original position indexes $Y^{p}$ with $\hat{Y}^p = [T_1^p, T_2^p, \ldots, T_{|R|}^p]$, where $T_i^p = [1, 1, 2, 1, 1, 2, 2]$ represents the position index sequence of the $i$-th tuple $T_i = [s_i^{tc}, e_i^{tc}, c_i^{tc}, s_i^{sc}, e_i^{sc}, c_i^{sc}, r_i]$ in the target sequence. The rationale behind this design is two-fold: 1) From the intra-tuple perspective, for each tuple, we set an identical position index for all span-related elements (i.e.
$s_i^{sc}$, $e_i^{sc}$, $s_i^{tc}$ and $e_i^{tc}$) and another identical position index for all category-related elements (i.e., $c_i^{sc}$, $c_i^{tc}$ and $r_i$). This enables the model to better learn the difference between the two kinds of elements. 2) From the inter-tuple perspective, unlike the original positional encoding scheme where each tuple has a unique position index sequence, we assign an identical position index sequence (i.e., $[1, 1, 2, 1, 1, 2, 2]$) to all tuples. In this way, the order information among different tuples that exists in the original positional encoding scheme is eliminated, thus reducing the effect of the order biases.

# 5 Experimental Setups

# 5.1 Datasets

We evaluate our proposed model on two public AM benchmarks: Argument Annotated Essays (AAE) (Stab and Gurevych, 2017) and Consumer Debt Collection Practices (CDCP) (Niculae et al., 2017).

The AAE benchmark consists of 402 persuasive essays annotated with three types of ACs (MajorClaim, Claim, Premise) and four types of ARs (Support, Attack, For, Against). Note that, in our experiments, we convert For and Against to Support and Attack, respectively, according to the stance polarity. The AC and AR category label lists of AAE are thus $L^{c} =$ [MajorClaim, Claim, Premise] and $L^{r} =$ [Support, Attack]. Each essay in AAE contains several paragraphs, and there are 1,833 paragraphs in total (369 paragraphs are reserved for testing). Moreover, AAE is a tree-structured benchmark, where ACs and ARs are constrained to form one or more directed trees within each paragraph.

The CDCP benchmark consists of 731 argumentative user comments about rule proposals, 150 of which are held out for testing. In this benchmark, there are five types of ACs and two types of ARs, with the category label lists $L^{c} =$ [Fact, Testimony, Value, Policy, Reference] and $L^{r} =$ [Reason, Evidence].
Unlike the AAE benchmark, CDCP is a non-tree-structured benchmark, where the ACs and ARs in a comment can form a directed graph.

# 5.2 Baselines

We compare our proposed model with the following baselines:

- ILP: A feature-based approach which jointly optimizes the subtasks of AM via Integer Linear Programming (ILP) (Persing and Ng, 2016; Eger et al., 2017).
- LSTM-Parser: A neural dependency parser based on stack LSTMs, which was proposed by Dyer et al. (2015) and applied to the end-to-end AM task by Eger et al. (2017).
- LSTM-ER: An end-to-end relation extraction model combining tree-structured and sequential LSTMs (Miwa and Bansal, 2016), adapted for extracting argument structure by Eger et al. (2017).
- BiPAM: Another dependency parsing-based model for end-to-end AM, based on a biaffine neural network (Ye and Teufel, 2021). Note that this model uses BERT-Base (Devlin et al., 2019) as its base model, which has a similar number of parameters to the BART-Base model we adopt.
- BiPAM-syn: The BiPAM model enhanced with explicit syntactic information produced by the Stanford syntactic dependency parser (Manning et al., 2014), which is the current state-of-the-art method.
- BART-B: The basic model described in Section 4.1, which is similar to the model of Yan et al. (2021a).

# 5.3 Evaluation Metrics

Following previous work (Persing and Ng, 2016; Eger et al., 2017; Ye and Teufel, 2021), we employ the micro F1 score to evaluate both the ACI (C-F1) and ARI (R-F1) tasks.

More precisely, for ACI, a true positive for the C-F1 score is a predicted AC that exactly matches a gold-standard AC, i.e., their boundaries and AC category labels are exactly the same. Similarly, for ARI, a true positive for the R-F1 score is a predicted AR that exactly matches a gold-standard AR, i.e., their source ACs, target ACs and AR category labels are all identical.
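Both metrics are exact-match micro F1 scores over sets of predicted tuples, which can be sketched as follows (an illustrative helper, not the authors' evaluation script; for C-F1 the items would be (start, end, category) triples, for R-F1 (source AC, target AC, relation) triples):

```python
def exact_match_f1(pred, gold):
    """Micro F1 with exact tuple matching.

    pred, gold -- iterables of hashable tuples; a prediction is a true
    positive only if every field matches a gold tuple exactly. Over a
    corpus, pred/gold would pool the tuples of all test documents.
    """
    pred, gold = set(pred), set(gold)
    tp = len(pred & gold)  # exact matches only
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```

For the ACs of Figure 1, a prediction {(3, 5, "Claim"), (10, 13, "Premise")} against gold {(3, 5, "Claim"), (10, 14, "Premise")} scores C-F1 = 0.5: the Premise span is off by one token, so it does not count as a true positive despite the large overlap.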
# 5.4 Implementation Details

Following Ye and Teufel (2021), for the AAE benchmark, we train our model at the paragraph level, since most ARs occur within a single paragraph.
| Dataset | Methods | C-F1 | R-F1 |
| --- | --- | --- | --- |
| AAE | ILP | 62.61 | 34.74 |
| | LSTM-Parser | 58.86 | 35.63 |
| | LSTM-ER | 70.83 | 45.52 |
| | BiPAM | 72.90 | 45.90 |
| | BiPAM-syn | 73.50 | 46.40 |
| | BART-B | 73.61 | 47.93 |
| | Ours | **75.94** | **50.08** |
| CDCP | BiPAM* | 41.15 | 10.34 |
| | BART-B | 56.15 | 13.76 |
| | Ours | **57.72** | **16.57** |
Table 2: Experimental results with baselines. The best scores are in bold. * denotes our implementations.

For both the AAE and CDCP benchmarks, we randomly choose $15\%$ of the training set for validation. All experiments are conducted ten times with different random initializations, and the average scores are reported. Notably, there are a few isolated ACs that are not involved in any AR. For such ACs, we introduce a special "none" token to construct a pseudo tuple. For example, if the Claim in Figure 1 were an isolated AC, its corresponding pseudo tuple would be [3, 5, Claim, none, none, none, none].

We use the pre-trained BART-Base as the base model. The learning rate is set to 5e-5 and 8e-5 for the AAE and CDCP benchmarks, respectively. AdamW (Loshchilov and Hutter, 2019) is used for optimization with $(\beta_{1} = 0.9, \beta_{2} = 0.999, \epsilon = 1e{-}8)$. We use a warm-up strategy with the warm-up ratio set to 0.01. Each MLP contains 2 layers with a hidden size of 768. Moreover, the dropout rate is set to 0.3 and the batch size is set to 32. Following Yan et al. (2021a), the hyperparameter $\alpha$ is set to 0.5, and beam search with a beam size of 4 is used for decoding during inference. We train our model for 75 epochs and select the best checkpoint based on the average of C-F1 and R-F1 on the validation set.

# 6 Results and Analysis

# 6.1 Main Results

Table 2 shows the overall performance of the baselines and our proposed model. Our model significantly $(p < 0.01)$ outperforms the BiPAM-syn model by $2.44\%$ and $3.68\%$ on the C-F1 and R-F1 scores, respectively, achieving state-of-the-art performance on the AAE benchmark. On the CDCP benchmark, our model also outperforms BiPAM by a large margin $(p < 0.01)$. Also, the basic BART model with the vanilla pointer mechanism (BART-B) can already surpass the current state-of-the-art model, BiPAM-syn, indicating that it might be more appropriate to formalize the end-to-end AM task as a generation task rather than as a dependency parsing task. The performance of BART-B is further improved by introducing our proposed RPE and CPM. In addition, it is worth noting that both LSTM-ER and BiPAM-syn are enhanced with explicit syntactic information, while our model requires no information other than the input text and still achieves significantly better results.

| Dataset | Methods | C-F1 | R-F1 |
| --- | --- | --- | --- |
| AAE | Ours | 75.94 | 50.08 |
| | w/o RPE | 74.27 | 48.22 |
| | w/o CPMT | 75.39 | 49.27 |
| | w/o CPMI | 75.33 | 49.36 |
| | w/o CPM | 74.07 | 48.28 |
| CDCP | Ours | 57.72 | 16.57 |
| | w/o RPE | 58.13 | 15.11 |
| | w/o CPMT | 57.11 | 15.14 |
| | w/o CPMI | 56.06 | 15.70 |
| | w/o CPM | 55.95 | 14.67 |

Table 3: The results of ablation experiments. RPE denotes the reconstructed positional encoding strategy. CPMT and CPMI are abbreviations for the constrained pointer mechanism in the training and inference stages, respectively.

# 6.2 Ablation Study

We perform ablation experiments to reveal the effect of each module in our model on both the AAE and CDCP benchmarks. The results are shown in Table 3. Overall, all of our proposed strategies bring performance improvements. In particular, applying the RPE to our generative model contributes about $1.86\%$ and $1.46\%$ in R-F1 on AAE and CDCP, respectively, showing the effectiveness of our proposed RPE in alleviating the order biases. Surprisingly, RPE slightly decreases the C-F1 score on CDCP. One likely factor is that some samples in CDCP contain many ARs, resulting in very long target sequences, for which RPE may be less able to capture long-term dependencies. We also observe that, in both the training and inference stages, CPM contributes significantly to the model performance. Concretely, removing either CPMT or CPMI causes performance degradation. Further, removing both of them (w/o CPM) results in further decreases, showing that CPMT and CPMI collaborate properly with each other to gain more performance improvement.

# 6.3 Invalid Predictions Analysis

According to Yan et al. (2021a,b), adapting generative models to IE tasks suffers from the issue of invalid predictions, since the generation of the target sequences is not fully controllable. Thus, to better demonstrate why our proposed RPE and CPM work, we carry out a detailed analysis of the invalid predictions. For our end-to-end AM task, we define three types of invalid predictions: 1) Invalid Length: the length of a valid tuple should be 7. 2) Invalid Order: in each tuple, the start index of an AC must be smaller than its end index. 3) Invalid Overlap: the predicted source and target AC spans in each tuple should not overlap with each other.

For each predicted target sequence in the test set, we consider it an invalid prediction if it contains one of the aforementioned invalid types. The percentage of each type of invalid prediction is shown in Table 4. Our proposed model avoids all invalid predictions because of the hard constraints imposed by CPMI. Compared to removing both CPMI and CPMT (w/o CPM), applying constraints only through multi-task learning during training (w/o CPMI) results in fewer invalid predictions. By comparing (w/o CPM) and (w/o CPM & RPE), we discover that our proposed RPE reduces the invalid order and invalid overlap predictions.

| Methods | AAE Length | AAE Order | AAE Overlap | CDCP Length | CDCP Order | CDCP Overlap |
| --- | --- | --- | --- | --- | --- | --- |
| Ours | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| w/o CPMI | 0.5% | 1.9% | 3.3% | 1.3% | 5.2% | 11.2% |
| w/o CPM | 0.9% | 3.1% | 4.5% | 2.3% | 5.4% | 12.3% |
| w/o CPM & RPE | 0.2% | 5.9% | 7.2% | 1.5% | 6.8% | 13.6% |

Table 4: Invalid predictions analysis on the test set.

| ID | RPE | AAE C-F1 | AAE R-F1 | CDCP C-F1 | CDCP R-F1 |
| --- | --- | --- | --- | --- | --- |
| 1 | [1,1,2,1,1,2,2] | 75.94 | 50.08 | 57.71 | 16.46 |
| 2 | [1,1,2,1,1,2,3] | 75.38 | 49.66 | 58.18 | 16.23 |
| 3 | [1,2,3,4,5,6,7] | 75.84 | 49.16 | 57.29 | 15.03 |
| 4 | [1,2,3,1,2,3,3] | 75.32 | 49.01 | 57.77 | 16.36 |
| 5 | [1,2,3,1,2,3,4] | 75.33 | 48.91 | 58.42 | 16.04 |
| 6 | iden. | 74.83 | 48.78 | 56.43 | 15.31 |
| 7 | iden. w/ dist. | 74.93 | 48.82 | 56.61 | 16.03 |
| 8 | original | 74.07 | 48.22 | 58.13 | 15.11 |

Table 5: Effect of different RPE schemes. Here, "original" denotes using the original positional encoding of the BART decoder. "iden." denotes that the positional embedding of each token in the target sequence is identical. "iden. w/ dist." denotes that the positional embeddings are identical within each tuple but distinct among tuples. Each list in rows 1-5 indicates the position index sequence for each tuple, which is identical among tuples.
We argue that the order biases can disrupt the decoding process, causing the model to produce more invalid predictions, whereas RPE can alleviate this issue. However, RPE can increase the invalid length predictions, probably because the original PE, with its strong order biases, is more favorable for controlling the prediction length.

# 6.4 Effect of Different RPE

To find an appropriate positional encoding scheme for the decoder of our model, we explored various methods. As shown in Table 5, it is better to assign one identical position index to all span-related elements (i.e., $s_i^{sc}$, $e_i^{sc}$, $s_i^{tc}$ and $e_i^{tc}$) and another to all category-related elements (i.e., $c_i^{sc}$, $c_i^{tc}$ and $r_i$) (row 1). This is because the span-related and category-related elements have intrinsically different meanings: the former are used to recognize the locations and boundaries of ACs, while the latter are used to identify the categories of ACs and ARs. This is further confirmed by rows 3 and 6, where setting either distinct or identical position indexes for all elements decreases performance. In addition, using different position index patterns among tuples (rows 7 and 8) does not work well, which further confirms the negative effect of the order biases between tuples. Not surprisingly, keeping the original PE of BART (row 8) achieves the worst results due to the order biases. We use [1, 1, 2, 1, 1, 2, 2] as our final RPE since it yields the best R-F1 scores on both AAE
On the other hand, we present a constrained pointer mechanism to further improve our model, which is achieved by multi-task learning during training and constrained decoding during inference. The extensive experimental results and detailed analysis demonstrate the superiority of our proposed method. + +# Limitations + +Although our proposed reconstructed positional encoding can alleviate the order biases problem, it may not be eliminated completely because the order of the target sequence is still fixed during training. Therefore, for future work, we plan to explore better approaches to address the order biases issue. + +In addition, our model has the problem of generating repetitive tuples. Although this does not affect the performance, it can increase the inference time. Therefore, we will also investigate methods to mitigate the repetitive generation problem in future work. + +# Acknowledgments + +This work was partially supported by the National Natural Science Foundation of China 62006062 and 62176076, Shenzhen Foundational Research Funding JCYJ20200109113441941, JCYJ20210324115614039, the Major Key Project of PCL2021A06, Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies 2022B1212010005. + +# References + +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. + +Jianzhu Bao, Chuang Fan, Jipeng Wu, Yixue Dang, Ji-achen Du, and Ruifeng Xu. 2021. A neural transition-based model for argumentation mining. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6354-6364. Association for Computational Linguistics. 
Roy Bar-Haim, Yoav Kantor, Lilach Eden, Roni Friedman, Dan Lahav, and Noam Slonim. 2020. Quantitative argument summarization and beyond: Cross-domain key point analysis. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 39-49. Association for Computational Linguistics.

Filip Boltužić and Jan Šnajder. 2014. Back up your stance: Recognizing arguments in online discussions. In Proceedings of the First Workshop on Argument Mining, hosted by the 52nd Annual Meeting of the Association for Computational Linguistics, ArgMining@ACL 2014, June 26, 2014, Baltimore, Maryland, USA, pages 49-58. The Association for Computer Linguistics.

Artem N. Chernodub, Oleksiy Oliynyk, Philipp Heidenreich, Alexander Bondarenko, Matthias Hagen, Chris Biemann, and Alexander Panchenko. 2019. TARGER: neural argument mining at your fingertips. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 3: System Demonstrations, pages 195-200. Association for Computational Linguistics.

Oana Cocarascu and Francesca Toni. 2017. Identifying attack and support argumentative relations using deep learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1374-1379. Association for Computational Linguistics.

Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang. 2021. Template-based named entity recognition using BART. In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 1835-1845. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.

Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 484-490. Association for Computational Linguistics.

Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 334-343. The Association for Computer Linguistics.

Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 11-22. Association for Computational Linguistics.

Alexander R. Fabbri, Faiaz Rahman, Imad Rizvi, Borui Wang, Haoran Li, Yashar Mehdad, and Dragomir R. Radev. 2021. ConvoSumm: Conversation summarization benchmark and improved abstractive summarization with argument mining. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6866-6880.
Association for Computational Linguistics.

Hao Fei, Donghong Ji, Bobo Li, Yijiang Liu, Yafeng Ren, and Fei Li. 2021. Rethinking boundaries: End-to-end recognition of discontinuous mentions with pointer networks. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 12785-12793. AAAI Press.

Daniel Fernández-González and Carlos Gómez-Rodríguez. 2020. Transition-based semantic dependency parsing with pointer networks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7035-7046. Association for Computational Linguistics.

Eirini Florou, Stasinos Konstantopoulos, Antonis Koukourikos, and Pythagoras Karampiperis. 2013. Argument extraction for supporting public policy formulation. In Proceedings of the 7th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, LaTeCH@ACL 2013, August 8, 2013, Sofia, Bulgaria, pages 49-54. The Association for Computer Linguistics.

Debanjan Ghosh, Aquila Khanam, Yubo Han, and Smaranda Muresan. 2016. Coarse-grained argumentation features for scoring persuasive essays. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 2: Short Papers. The Association for Computer Linguistics.

Debanjan Ghosh, Smaranda Muresan, Nina Wacholder, Mark Aakhus, and Matthew Mitsui. 2014. Analyzing argumentative discourse units in online interactions. In Proceedings of the First Workshop on Argument Mining, hosted by the 52nd Annual Meeting of the Association for Computational Linguistics, ArgMining@ACL 2014, June 26, 2014, Baltimore, Maryland, USA, pages 39-48. The Association for Computer Linguistics.
Theodosios Goudas, Christos Louizos, Georgios Petasis, and Vangelis Karkaletsis. 2014. Argument extraction from news, blogs, and social media. In Artificial Intelligence: Methods and Applications - 8th Hellenic Conference on AI, SETN 2014, Ioannina, Greece, May 15-17, 2014. Proceedings, volume 8445 of Lecture Notes in Computer Science, pages 287-299. Springer.

Xinyu Hua, Zhe Hu, and Lu Wang. 2019. Argument generation with retrieval, planning, and realization. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 2661-2672. Association for Computational Linguistics.

Kuo Yu Huang, Hen-Hsen Huang, and Hsin-Hsi Chen. 2021. HARGAN: heterogeneous argument attention network for persuasiveness prediction. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 13045-13054. AAAI Press.

Yohan Jo, Jacky Visser, Chris Reed, and Eduard H. Hovy. 2019. A cascade model for proposition extraction in argumentation. In Proceedings of the 6th Workshop on Argument Mining, ArgMining@ACL 2019, Florence, Italy, August 1, 2019, pages 11-24. Association for Computational Linguistics.

Khalid Al Khatib, Lukas Trautner, Henning Wachsmuth, Yufang Hou, and Benno Stein. 2021. Employing argumentation knowledge graphs for neural argument generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4744-4754. Association for Computational Linguistics.
Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, and Kentaro Inui. 2019. An empirical study of span representations in argumentation structure parsing. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 4691-4698. Association for Computational Linguistics.

John Lawrence and Chris Reed. 2019. Argument mining: A survey. Comput. Linguistics, 45(4):765-818.

Mirko Lenz, Premtim Sahitaj, Sean Kallenberg, Christopher Coors, Lorik Dumani, Ralf Schenkel, and Ralph Bergmann. 2020. Towards an argument mining pipeline transforming texts to argument graphs. In Computational Models of Argument - Proceedings of COMMA 2020, Perugia, Italy, September 4-11, 2020, volume 326 of Frontiers in Artificial Intelligence and Applications, pages 263-270. IOS Press.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871-7880. Association for Computational Linguistics.

Jialu Li, Esin Durmus, and Claire Cardie. 2020. Exploring the role of argument structure in online debate persuasion. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 8905-8912. Association for Computational Linguistics.

Sha Li, Heng Ji, and Jiawei Han. 2021. Document-level event argument extraction by conditional generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 894-908.
Association for Computational Linguistics.
+Marco Lippi and Paolo Torroni. 2015. Context-independent claim detection for argument mining. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 185-191. AAAI Press.
+Linlin Liu, Xiang Lin, Shafiq R. Joty, Simeng Han, and Lidong Bing. 2019. Hierarchical pointer net parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1007-1017. Association for Computational Linguistics.
+Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
+Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard H. Hovy. 2018. Stack-pointer networks for dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1403-1414. Association for Computational Linguistics.
+Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, System Demonstrations, pages 55-60. The Association for Computer Linguistics.
+Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 319-328. The Association for Computational Linguistics.
+Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics. +Marie-Francine Moens, Erik Boiy, Raquel Mochales Palau, and Chris Reed. 2007. Automatic detection of arguments in legal texts. In *The Eleventh International Conference on Artificial Intelligence and Law*, Proceedings of the Conference, June 4-8, 2007, Stanford Law School, Stanford, California, USA, pages 225-230. ACM. +Gaku Morio, Hiroaki Ozaki, Terufumi Morishita, Yuta Koreeda, and Kohsuke Yanai. 2020. Towards better non-tree argument mining: Proposition-level bi-affine parsing with task-specific parameterization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 3259-3266. Association for Computational Linguistics. +Huy Nguyen and Diane J. Litman. 2015. Extracting argument and domain words for identifying argument components in texts. In Proceedings of the 2nd Workshop on Argumentation Mining, ArgMining@HLT-NAACL 2015, June 4, 2015, Denver, Colorado, USA, pages 22–28. The Association for Computational Linguistics. +Huy V. Nguyen and Diane J. Litman. 2018. Argument mining for improving the automated scoring of persuasive essays. In Proceedings of the Thirty-Second + +AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5892-5899. AAAI Press. +Vlad Niculae, Joonsuk Park, and Claire Cardie. 2017. Argument mining with structured svms and rnns. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 985-995. Association for Computational Linguistics. +Raquel Mochales Palau and Marie-Francine Moens. 2009. Argumentation mining: the detection, classification and structure of arguments in text. In The 12th International Conference on Artificial Intelligence and Law, Proceedings of the Conference, June 8-12, 2009, Barcelona, Spain, pages 98-107. ACM. +Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. +Andreas Peldszus. 2014. Towards segment-based recognition of argumentation structure in short texts. In Proceedings of the First Workshop on Argument Mining, hosted by the 52nd Annual Meeting of the Association for Computational Linguistics, ArgMining@ACL 2014, June 26, 2014, Baltimore, Maryland, USA, pages 88-97. The Association for Computer Linguistics. +Isaac Persing and Vincent Ng. 2016. End-to-end argumentation mining in student essays. In *NAACL HLT* 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 1384-1394. The Association for Computational Linguistics. +Peter Potash, Alexey Romanov, and Anna Rumshisky. 2017. Here's my point: Joint pointer architecture for argument mining. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1364-1373. Association for Computational Linguistics. +Nils Reimers, Benjamin Schiller, Tilman Beck, Johannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. 
Classification and clustering of arguments with contextualized word embeddings. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 567-578. Association for Computational Linguistics. +Liliang Ren, Chenkai Sun, Heng Ji, and Julia Hockenmaier. 2021. Hyspa: Hybrid span generation + +for scalable text-to-graph extraction. In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 4066-4078. Association for Computational Linguistics. +Ramon Ruiz-Dolz, José Alemany, Stella Heras Barberá, and Ana García-Fornes. 2021. Transformer-based models for automatic identification of argument relations: A cross-domain evaluation. IEEE Intell. Syst., 36(6):62-70. +Robin Schaefer and Manfred Stede. 2021. Argument mining on twitter: A survey. it Inf. Technol., 63(1):45-58. +Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1073-1083. Association for Computational Linguistics. +Noam Slonim, Yonatan Bilu, Carlos Alzate, Roy Bar-Haim, Ben Bogin, Francesca Bonin, Leshem Choshen, Edo Cohen-Karlik, Lena Dankin, Lilach Edelstein, et al. 2021. An autonomous debating system. Nature, 591(7850):379-384. +Wei Song, Ziyao Song, Lizhen Liu, and Ruiji Fu. 2020. Hierarchical multi-task learning for organization evaluation of argumentative student essays. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3875-3881. ijcai.org. +Christian Stab and Iryna Gurevych. 2014. Identifying argumentative discourse structures in persuasive essays. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar; A meeting of SIGDAT, a Special Interest Group of the ACL, pages 46-56. ACL. +Christian Stab and Iryna Gurevych. 2017. Parsing argumentation structures in persuasive essays. Computational Linguistics, 43(3):619-659. +Eva Maria Vecchi, Neele Falk, Iman Jundi, and Gabriella Lapesa. 2021. Towards argument mining for social good: A survey. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 1338-1352. Association for Computational Linguistics. +Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2692-2700. + +Hao Wang, Zhen Huang, Yong Dou, and Yu Hong. 2020. Argumentation mining on essays at multi scales. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 5480-5493. International Committee on Computational Linguistics. +Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, and Zheng Zhang. 2021a. A unified generative framework for aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 2416-2429. Association for Computational Linguistics. +Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021b. A unified generative framework for various NER subtasks. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 5808-5822. Association for Computational Linguistics.
+Songlin Yang and Kewei Tu. 2022. Bottom-up constituency parsing and nested named entity recognition with pointer networks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2403-2416. Association for Computational Linguistics.
+Yuxiao Ye and Simone Teufel. 2021. End-to-end argument mining as biaffine dependency parsing. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 669-678. Association for Computational Linguistics.
+Shuai Zhang, Yongliang Shen, Zeqi Tan, Yiquan Wu, and Weiming Lu. 2022. De-bias for generative extraction in unified NER task. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 808-818. Association for Computational Linguistics.
+Wenxuan Zhang, Yang Deng, Xin Li, Yifei Yuan, Lidong Bing, and Wai Lam. 2021a. Aspect sentiment quad prediction as paraphrase generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 9209-9219. Association for Computational Linguistics.
+Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, and Wai Lam. 2021b. Towards generative aspect-based sentiment analysis.
In Proceedings of the 59th Annual Meeting of the Association for Computational + +Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021, pages 504-510. Association for Computational Linguistics. \ No newline at end of file diff --git a/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/images.zip b/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..d352064b97c8511a0f3e1d137fbf3aa79f499c41 --- /dev/null +++ b/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:12d9fd6ad812e9fbf34f89b4d45766b311330f0e753632c6ae893caabde4486d +size 243855 diff --git a/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/layout.json b/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d03d2c0d349b1de93e162a4c824566f2605a721c --- /dev/null +++ b/agenerativemodelforendtoendargumentminingwithreconstructedpositionalencodingandconstrainedpointermechanism/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b55e32f5eeb690ae62f84cdb0ba5a373da26a0ea845f04750f3c92d29d85369 +size 484114 diff --git a/agentspecificdeonticmodalitydetectioninlegallanguage/303e9edb-a0f8-4897-a476-e0896b7ef487_content_list.json b/agentspecificdeonticmodalitydetectioninlegallanguage/303e9edb-a0f8-4897-a476-e0896b7ef487_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d5287fe7de794bc5f10cdb237183064d2fd6600e --- /dev/null +++ 
b/agentspecificdeonticmodalitydetectioninlegallanguage/303e9edb-a0f8-4897-a476-e0896b7ef487_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:caf228bf03d4d4b41464f2edaea4d67ff4ab54eed625c9ddd70a93176f5e8b09 +size 137743 diff --git a/agentspecificdeonticmodalitydetectioninlegallanguage/303e9edb-a0f8-4897-a476-e0896b7ef487_model.json b/agentspecificdeonticmodalitydetectioninlegallanguage/303e9edb-a0f8-4897-a476-e0896b7ef487_model.json new file mode 100644 index 0000000000000000000000000000000000000000..1ce04775fa91bf70383d0c8134a00214f9dfbac3 --- /dev/null +++ b/agentspecificdeonticmodalitydetectioninlegallanguage/303e9edb-a0f8-4897-a476-e0896b7ef487_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f1025a596e352a98ab5b4d4c1be9e1b551092951484c2a51e1f9cdb6bafbd304 +size 179932 diff --git a/agentspecificdeonticmodalitydetectioninlegallanguage/303e9edb-a0f8-4897-a476-e0896b7ef487_origin.pdf b/agentspecificdeonticmodalitydetectioninlegallanguage/303e9edb-a0f8-4897-a476-e0896b7ef487_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..cf764c5911d14a6416528980c2369026b7bf9c7a --- /dev/null +++ b/agentspecificdeonticmodalitydetectioninlegallanguage/303e9edb-a0f8-4897-a476-e0896b7ef487_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:082f7f6edc329146c7871859c76c61daabbee6dd176d2fab1ec82e5b3fe5345e +size 3438787 diff --git a/agentspecificdeonticmodalitydetectioninlegallanguage/full.md b/agentspecificdeonticmodalitydetectioninlegallanguage/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a7aa01da943d593a6c5068cfbfd05f1aabd04280 --- /dev/null +++ b/agentspecificdeonticmodalitydetectioninlegallanguage/full.md @@ -0,0 +1,601 @@ +# Agent-Specific Deontic Modality Detection in Legal Language + +Abhilasha Sancheti†‡, Aparna Garimella‡, Balaji Vasan Srinivasan‡, Rachel Rudinger† + +†University of Maryland, College Park + 
+$\ddagger$ Adobe Research
+
+{sancheti, rudinger}@umd.edu
+
+{garimell, balsrini}@adobe.com
+
+# Abstract
+
+Legal documents are typically long and written in *legalese*, which makes it particularly difficult for laypeople to understand their rights and duties. While natural language understanding technologies can be valuable in supporting such understanding in the legal domain, the limited availability of datasets annotated for deontic modalities in the legal domain, due to the cost of hiring experts and privacy issues, is a bottleneck. To this end, we introduce LEXDEMOD, a corpus of English contracts annotated with the deontic modality expressed with respect to a contracting party, or agent, along with the modal triggers. We benchmark this dataset on two tasks: (i) agent-specific multi-label deontic modality classification, and (ii) agent-specific deontic modality and trigger span detection using Transformer-based (Vaswani et al., 2017) language models. Transfer learning experiments show that the linguistic diversity of modal expressions in LEXDEMOD generalizes reasonably from lease to employment and rental agreements. A small case study indicates that a model trained on LEXDEMOD can detect red flags with high recall. We believe our work offers a new research direction for deontic modality detection in the legal domain.
+
+# 1 Introduction
+
+A contract is a legal document executed by two or more parties. To sign a contract (e.g., lease agreements, terms of service, privacy policies, EULA, etc.), it is important for these parties to precisely understand their obligations, entitlements, prohibitions, and permissions as described in the contract. However, for a layperson, understanding contracts can be difficult due to their length and the complexity of the legalese used. Therefore, a layperson often signs agreements without even reading them (Cole, 2015; Obar and Oeldorf-Hirsch, 2020).
+
+![](images/2db45c76e2db0c0abc1cf018ad2dc28501849b32203d365f9b39e6068c1cd3c9.jpg)
+
+Figure 1: Sample contract indicating the terminology used to refer to the elements of a contract. 'shall' triggers an obligation for the Lessee and an entitlement for the Lessor. A contracting party or agent is referred to via an "alias" (such as Lessor or Lessee) throughout the contract.
+
+Having a system that can provide an "at a glance" summary of obligations, entitlements, prohibitions, and permissions to a contracting party (henceforth, "agent") would be of great help not only to the agents but also to legal professionals during contract review.
While existing language processing and understanding systems can be used for legal understanding, the limited availability of annotated datasets in the legal domain, due to the cost of hiring experts and privacy issues, is a bottleneck. Furthermore, the highly specialized lexical and syntactic features of legalese make it difficult to directly apply systems trained on data from other linguistic domains (e.g., news) to the legal domain.
+
+For an "at a glance" summary of contracts, we first need to identify the obligations, entitlements, prohibitions, and permissions present in the contract for a given agent. Deontic modality is frequently used in contracts to express such obligations, entitlements, permissions, and prohibitions of agents (Ballesteros-Lintao et al., 2016). For instance, 'shall', 'shall not', and 'may' are used to express 'obligation/entitlement', 'prohibition', and 'permission', respectively, in example (1) below.
+
+(1) a. Tenant **shall** pay the rent to the Landlord.
+
+b. Landlord **shall not** obtain financing or enter into any agreement affecting the Property.
+c. Landlord **may** continue this Lease in effect after Tenant's abandonment and recover Rent as it becomes due.
+
+(2) a. Tenant **agrees to** pay the rent.
+
+b. Landlord **is responsible for** maintaining the structural soundness of the house.
+
+However, existing works for identifying such deontic modality types (henceforth "deontic types") either use rule-based (Wyner and Peters, 2011; Peters and Wyner, 2016; Dragoni et al., 2016; Ash et al., 2020) or data-driven (Neill et al., 2017; Chalkidis et al., 2018) approaches, which cannot be directly used for our purpose. This is because rule-based approaches are not robust, as they do not (in practice) capture the rich linguistic variety (e.g., the use of non-modal expressions in (2)) and ambiguity of modal expressions (e.g., 'shall' in (1a)).
Furthermore, annotated datasets used in the data-driven approaches do not consider multiple deontic types for a sentence and their association with the agent (e.g., (1a) is an instance of 'obligation' for the Tenant and an 'entitlement' for the Landlord). Although Funaki et al. (2020) introduced a corpus with annotations for rights, obligations, and associated agents, it does not cover all the deontic types. Moreover, different corpora consider different deontic types, lacking an accepted standard.
+
+In this work, we address these issues through the following contributions: (a) we present a linguistically-informed taxonomy for annotating deontic types in the legal domain, and use the taxonomy to build a corpus (LexDEMOD; §3) of English contracts with two types of annotations: (i) all deontic types expressed in a sentence with respect to an agent, and (ii) spans of modal triggers, i.e., expressions (e.g., the bold-faced phrases in examples (1) and (2)) that evoke the modal meaning; (b) we benchmark the corpus on two tasks: (i) agent-specific multi-label deontic modality classification (§6), and (ii) agent-specific deontic modality and trigger span detection (§7) using state-of-the-art Transformer (Vaswani et al., 2017) models; and (c) we perform transfer learning experiments (§8) to investigate the generalizability of the diverse modal expressions in LEXDEMOD, and a case study to detect red flags (§9) in lease agreements.
+
+# 2 Related Work
+
+# 2.1 NLP in the Legal Domain
+
+Prior works have investigated a number of tasks in the legal NLP domain, including legal judgement prediction (Aletras et al., 2016; Luo et al., 2017; Zhong et al., 2018; Chen et al., 2019; Chalkidis et al., 2019), legal entity recognition and classification (Cardellino et al., 2017; Chalkidis et al., 2017; Angelidis et al., 2018), legal question answering (Duan et al., 2019; Zhong et al., 2020), and legal summarization (Hachey and Grover, 2006; Bhattacharya et al., 2019; Manor and Li, 2019).
While legal NLP covers a wide range of tasks, limited efforts have been made for contract review despite it being one of the most time-consuming and tedious tasks. Leivaditi et al. (2020) introduced a benchmark for lease contract review for detecting named entities and red flags. Hendrycks et al. (2021) introduced a large expert-annotated dataset, and Tuggener et al. (2020) a large semi-automatically annotated dataset, for provision type classification across a variety of contract types. However, these datasets do not contain deontic type annotations, which is the focus of this work.
+
+# 2.2 Rights and Obligation Extraction
+
+Existing works either propose rule-based methods (Wyner and Peters, 2011; Peters and Wyner, 2016) or use a combination of NLP approaches such as syntax and dependency parsing (Dragoni et al., 2016) for extracting rights and duties from legal documents such as Federal code regulations, European directives, or customer protection codes. Another line of work (Bracewell et al., 2014; Neill et al., 2017; Chalkidis et al., 2018) uses machine learning and deep learning approaches to predict deontic types with the help of small datasets. However, rule-based approaches are not robust due to the rich linguistic variety and ambiguity of modal expressions, and the annotated datasets do not consider multiple deontic types for a sentence and their association with the agents, which is important for contract understanding. Matulewska (2010) analyzed contracts of different countries and types with respect to the fine-grained deontic modalities covered in them, but only considers obligation, permission, and prohibition with temporal constraints. Ash et al. (2020) propose a rule-based unsupervised approach to identify deontic types with respect to an agent and compute statistics for the rights and duties of an agent. However, rule-based approaches have the limitations mentioned above. Recently, Funaki et al.
(2020) curate an annotated corpus of contracts for recognizing rights and obligations, along with the agents, using LegalRuleML (Athan et al., 2013). However, the corpus is not publicly available, does not annotate modal triggers, and does not cover all the deontic types expressed in a contract.
+
+# 2.3 Modality Annotation and Detection
+
+Modality refers to the linguistic ability to describe alternative ways the world could be, and is commonly expressed by modal auxiliaries such as shall, will, must, can, and may. Existing studies have proposed various modality annotation schemas for Portuguese (Hendrickx et al., 2012; Avila et al., 2015) and applied them (Quaresma et al., 2014) to build machine learning models to identify the deontic types. However, these schemas do not cover all the deontic types and restrict the identification to three modal auxiliaries. While Athan et al. (2013) and Nazarenko et al. (2018) propose XML-based annotation schemas to formally represent legal text in English and highlight the various interpretive issues that arose during the annotation, they do not consider trigger annotation. Although Rubinstein et al. (2013) and Pyatkin et al. (2021) consider trigger and modality type (not restricted to modal auxiliaries) annotations at different levels of granularity, fine-grained deontic types and their association with the agent are not considered. As different studies consider different deontic types, lacking an accepted standard, we present a linguistically-informed taxonomy for annotating deontic types and their triggers.
+
+# 3 LEXDEMOD Dataset Curation
+
+We first describe the dataset source (§3.1), followed by the pre-processing (§3.2), annotation protocol (§3.3), and the quantitative and qualitative analysis (§3.4) of the collected dataset.
+
+# 3.1 Dataset Source
+
+We use the contracts available in the LEDGAR corpus (Tuggener et al., 2020), which comprises material contracts (Exhibit-10), such as agreements (e.g., shareholder/employment/lease/non-disclosure), crawled from the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system. EDGAR is maintained by the U.S. Securities and Exchange Commission (SEC). The documents filed on SEC are public information and can be redistributed without further consent.
+
+# 3.2 Contract Pre-processing
+
+The raw contracts in LEDGAR are available in HTML format. We extract all the paragraphs (henceforth, "provisions") from the HTML of a contract (identified by $<\mathsf{p}>$ or $<\mathsf{div}>$ tags), and heuristically filter out the provisions defining any terminology (identified by the presence of phrases such as 'shall mean', 'means', 'shall have the meaning', 'has the meaning', etc.). As contracts have a hierarchical structure (e.g., bullets and sub-bullets), we prepend the higher-level context to the lower level (see A.1), e.g., combining sub-bullets with their context in the main bullet. After this, we heuristically extract the type of the contract (e.g., lease or employment contract) and the aliases (e.g., "Lessee" in Figure 1) used to refer to the contracting parties from the content of the contracts.
+
+Contract Type Extraction. We heuristically scan the first 20 provisions to identify the type of the contract using regular expressions (all uppercase characters and the presence of 'AGREEMENT').
+
+Agent Alias Extraction. An agent in a contract can be either a person or a company. Therefore, we scan the first 20 provisions of a contract to find company mentions using the lexnlp library (Bommarito II et al., 2021) and named entities with the 'person' tag using the spaCy library (Honnibal et al., 2020).
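+The filtering and extraction heuristics described above can be sketched roughly as follows. This is our own illustration of the described rules, not the authors' released code; the exact phrase lists and regular expressions (e.g., `ALIAS_RE`) are assumptions based on the description.

```python
import re

# Phrases that mark a provision as a terminology definition (Section 3.2).
DEFINITION_PHRASES = ("shall mean", "means", "shall have the meaning", "has the meaning")

# Assumed pattern: the alias appears in parentheses, optionally quoted,
# optionally preceded by "the", right after the agent mention (cf. Figure 1).
ALIAS_RE = re.compile(r"\(\s*(?:the\s+)?[\"']?([A-Z][A-Za-z]+)[\"']?\s*\)")

def is_definition_provision(provision: str) -> bool:
    """Heuristically flag provisions that define terminology."""
    text = provision.lower()
    return any(p in text for p in DEFINITION_PHRASES)

def extract_contract_type(provisions):
    """Scan the first 20 provisions for an all-uppercase title containing 'AGREEMENT'."""
    for p in provisions[:20]:
        if "AGREEMENT" in p and p.strip() == p.strip().upper():
            return p.strip()
    return None

def extract_alias(sentence: str):
    """Pull a parenthesised alias such as (the "Lessor") from a provision sentence."""
    m = ALIAS_RE.search(sentence)
    return m.group(1) if m else None

provisions = [
    'LEASE AGREEMENT',
    'Landmark Properties LLC (the "Lessor") hereby leases the Premises.',
    '"Base Rent" shall have the meaning set forth in Article I.',
]
print(extract_contract_type(provisions))       # LEASE AGREEMENT
print(extract_alias(provisions[1]))            # Lessor
print(is_definition_provision(provisions[2]))  # True
```

+In practice, as the paper notes, the most frequently occurring extracted aliases are then selected manually per contract type.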
We then use a regular expression (the alias is mentioned in parentheses following the agent mention; see Figure 1) to extract the alias used to refer to the found agents in the provisions. For each type of contract, we manually select the most frequently occurring aliases extracted with the regular expression.
+
+We collect all the sentences of provisions belonging to a contract wherein an alias for an agent is found. We posit that if a sentence does not contain an alias, then no deontic type is expressed for an agent. For instance, 'Any such month-to-month tenancy shall be subject to every other term, covenant and
| Deontic Type | Description |
| --- | --- |
| Obligation (Obl) | Agent is required to have or do something |
| Entitlement (Ent) | Agent has the right to have or do something |
| Prohibition (Pro) | Agent is forbidden to have or do something |
| Permission (Per) | Agent is allowed to have or do something |
| No Obligation (Nobl) | Agent is not required to have or do something |
| No Entitlement (Nent) | Agent has no right to have or do something |
+
+Table 1: Taxonomy for deontic type.
+
+agreement contained herein.' is a rule and does not specifically mention any deontic type for an agent.
+
+# 3.3 Annotation Protocol
+
+Annotation task description. We propose agent-specific deontic modality detection tasks that address the following issues: (i) the non-robustness of rule-based extraction of rights and duties, as it cannot capture the rich linguistic variety and ambiguity of modal expressions; (ii) the lack of a standard taxonomy for annotating fine-grained deontic types; (iii) the non-association of the deontic type with the agent during annotation; and (iv) the treatment of deontic type detection as a single-class classification task. Consider, for instance, the following:
+
+(3) a. [Tenant] Tenant **shall** (obl) pay the rent to the Landlord and **may** (per) use the parking space.
+b. [Landlord] Tenant **shall** (ent) pay the rent to the Landlord and may use the parking space.
+
+In these examples, the words in bold evoke the modal expression, which we call a trigger. For Tenant as the [Agent], an obligation (obl) and a permission (per) are expressed in sentence (3a), and an entitlement (ent) for the Landlord in (3b).
+
+Our data collection is performed via crowdsourcing on Amazon Mechanical Turk (AMT). We ask the workers to provide two types of annotations for each sentence with respect to an agent (referred to via an alias): (i) select all the deontic types expressed, and (ii) select the trigger word(s) (as a span) for each selected deontic type. If a sentence contains more than one agent, we duplicate it to get separate annotations with respect to each agent, so that the workers focus their understanding on one agent at a time. This task design choice helps in better estimating the time taken to do each HIT (Human Intelligence Task), as the number of agent mentions in a sentence can vary. This also simplifies the custom annotation interface (see Figure 6) built to get the annotations.
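+Concretely, each annotated instance pairs a sentence with one agent, the set of deontic labels, and the trigger spans. A minimal sketch of such records for example (3) is shown below; the field names are illustrative, not the released schema.

```python
# One annotation record per (sentence, agent) pair, as described in Section 3.3.
# The same sentence is duplicated for each agent alias it mentions.
sentence = "Tenant shall pay the rent to the Landlord and may use the parking space."

annotations = [
    {
        "agent": "Tenant",
        "labels": ["obligation", "permission"],
        # trigger spans stored as (deontic_type, trigger_text) pairs
        "triggers": [("obligation", "shall"), ("permission", "may")],
    },
    {
        "agent": "Landlord",
        "labels": ["entitlement"],
        "triggers": [("entitlement", "shall")],
    },
]

# Multi-label classification target for one record: one binary per deontic type
# from the taxonomy in Table 1.
TYPES = ["obligation", "entitlement", "prohibition", "permission",
         "no_obligation", "no_entitlement"]
y = [int(t in annotations[0]["labels"]) for t in TYPES]
print(y)  # [1, 0, 0, 1, 0, 0]
```

+This duplication per agent is what makes the task agent-specific: the same sentence yields different label vectors depending on which alias it is paired with.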
Detailed guidelines for annotation are provided in A.2 (Figure 5).
+
+Taxonomy for deontic type. We base our taxonomy for deontic type annotation (Table 1) on the deontic logic theory of Von Wright (1951). Von Wright's categorical modals are better suited to legal contracts, which talk about the rights and duties of contracting parties (Ballesteros et al., 2020; Matulewska, 2010), than other categorizations (Chung, 1985; Palmer, 2001; Jespersen, 2013), which include categories not found in contracts (e.g., desiderative, hortative). We also include no-obligation and no-entitlement categories, found on manual inspection of contracts, to cover all possible modalities.
+
+Annotation process and requirements. As legalese is syntactically complex and difficult to understand, the annotation task is quite intricate in nature. To ensure that the workers properly understand the task, we first conduct a qualification test which explains the task with the help of right and wrong annotation examples along with explanations, and contains 10 questions. The qualification test is open to workers with a $\geq 95\%$ approval rate and $\geq 1,000$ approved HITs. Finally, we select 25 workers who answered all the qualification questions (details in A.3) correctly.
+
+The main annotation task consists of 12 sentences per HIT, including 2 quality-check questions to ensure workers provide good annotations. We publish 3 pilot HITs, with revised guidelines in each of them. We also manually check randomly selected annotations to ensure quality and provide feedback to the workers. We observe a learning curve for the task and considerable variation in the time taken per HIT $(7.5 \pm 1.5\mathrm{~min})$. After the pilots, the annotations were mainly performed by 3 workers. We publish a batch of 50 HITs with 3 annotations for each HIT from the 3 workers.
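+For the triply-annotated sentences, §3.4 later combines the three annotations by majority voting over deontic types and a union of the trigger spans for the majority types. A rough sketch of that merge is given below; it is our own illustration, and the token-index representation of spans is an assumption.

```python
from collections import Counter

def merge_annotations(annotations):
    """Merge 3 workers' annotations for one (sentence, agent) pair.

    annotations: list of dicts mapping deontic type -> set of trigger token indices.
    A type is kept if at least 2 of the 3 workers selected it (majority vote);
    its trigger span is the union of the spans annotated for that type.
    """
    votes = Counter(t for ann in annotations for t in ann)
    merged = {}
    for dtype, n in votes.items():
        if n >= 2:  # majority among 3 annotators
            span = set()
            for ann in annotations:
                span |= ann.get(dtype, set())
            merged[dtype] = span
    return merged

workers = [
    {"obligation": {1}, "permission": {8}},
    {"obligation": {1, 2}},
    {"obligation": {1}, "permission": {8, 9}},
]
print(merge_annotations(workers))
# {'obligation': {1, 2}, 'permission': {8, 9}}
```

+A type selected by only one of the three workers (here, none) would be dropped, which is what makes the merged test labels more reliable than any single annotation.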
As we found good inter-annotator agreement between the 3 workers (see §3.4), we collect only one annotation per HIT for the remaining HITs to get more sentences annotated within a reasonable time.

# 3.4 Annotated Dataset Statistics and Analysis

Each contract contains $202.6(\pm 162.4)$ provisions on average (standard deviation in parentheses), with $2.2(\pm 1.7)$ sentences per provision; each contract consists of $306.4(\pm 235.8)$ sentences on average. Among these, $75.8(\pm 14.4)\%$ of sentences per contract have at least one agent mentioned in them, with an average length of $65(\pm 47)$.

We collect a total of 8,230 trigger span annotations for 7,092 sentences from 23 lease contracts, after considering only HITs for which both quality-check questions are correctly answered. For duplicate sentences, we retain the annotations that are in line with one of the authors (and discard $14.1\%$ of the duplicated ones; a few examples are provided in A.4). The test set comprises sentences from 5 contracts, including those for which we have 3 annotations per sentence; the rest of the sentences are divided into train and development sets such that sentences from the same contract belong to the same set. We combine the 3 annotations for a subset of sentences in the test set using majority voting$^{9}$ for the deontic type and by taking a union$^{10}$ of the annotated trigger spans for the majority deontic types. The average inter-annotator agreement for each deontic type, computed with Krippendorff's $\alpha$ (Krippendorff, 2018), is substantial ($\alpha = 0.65$) given the complexity of the task (see Table 7 for type-wise agreement). For trigger span annotation, the token-level inter-annotator agreement for the majority deontic types of a sentence is also substantial ($\alpha = 0.71$). The fine-grained dataset statistics after filtering and resolving disagreements are presented in Table 2.

| Split | #Sent. | #Spans | Obl | Ent | Pro | Per | Nobl | Nent | None |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Train | 4282 | 5279 | 1841 | 1231 | 343 | 289 | 265 | 239 | 1071 |
| Dev | 330 | 421 | 176 | 86 | 20 | 18 | 21 | 22 | 78 |
| Test | 1777 | 1952 | 575 | 418 | 64 | 167 | 101 | 88 | 539 |

Table 2: Dataset Statistics.

![](images/3bbd5584c8255079b043bdd98d7297058dbd4915181b75ab760810fe24fa641e.jpg)
Figure 2: Distribution of deontic type with respect to Tenant and Landlord for lease agreements.

Qualitative analysis. Figure 2 shows the distribution of annotated spans$^{11}$ over deontic types with respect to each agent (Landlord and Tenant for
| Type | Top 10 triggers |
| --- | --- |
| Obl | shall, will, agrees, agree, acknowledges, acknowledge, represents and warrants, shall be responsible for, undertakes, will be responsible for |
| Ent | shall, will, agrees, shall have the right to, shall be entitled to, represents and warrants, acknowledges, waives no rights, shall not, retains all other rights, will be entitled to |
| Pro | shall not, will not, may not, nor shall, not to be, neither lessor nor lessee may, in no event shall, nor will, will not allow, nor may |
| Per | may, is permitted, will allow, has the right, shall, or at landlord's option, shall be permitted to, shall be allowed |
| Nobl | shall not be liable, shall not be obligated to, shall not be required to, shall, shall have no obligation to, in no event shall landlord be obligated to, waives, shall not, shall have no liability |
| Nent | shall, shall have no right to, waives no rights, shall not, shall have no obligation to, waives, shall not be required, shall not be obligated, waive the right, shall not have the right to |
Table 3: Top 10 triggers for each deontic type in decreasing order of frequency.

![](images/1289955ddc07c438e2e77e6cb94e6028205dced1364a9ab06d00f51ec2d410b5.jpg)
Figure 3: Frequency-based wordcloud of all the triggers.

lease agreements$^{12}$) in the train set. Interestingly, tenants have more obligations and prohibitions, and fewer entitlements or permissions, than landlords. $17.3\%$ of the sentences have multiple trigger annotations, and $48.6\%$ of these express multiple deontic types. $24.8\%$ of the sentences do not express any deontic type with respect to a given agent. The dataset contains 383 unique triggers across all the deontic types. Table 3 lists the top 10 triggers for each deontic type, and Figure 3 shows a frequency-based wordcloud of the annotated triggers. 'Shall' constitutes $44.6\%$ of the annotated triggers and is used to express not only obligation but also entitlement, no-obligation, and no-entitlement. Prohibitions may be expressed using negation words $(14.9\%)$ within the context of a sentence (e.g., 'neither lessor nor lessee may'). While modal auxiliaries (e.g., shall, will, may) are more frequently used, $45.2\%$ of the total unique triggers are non-modal expressions (e.g., agrees, represents), covering $20.3\%$ of the annotated trigger spans. This shows that LEXDEMOD covers a wide variety of linguistic expressions of deontic modality in legalese, not restricted to modal auxiliaries. Annotated samples from the dataset are provided in Table 15 in A.9.

# 4 Proposed Benchmarking Tasks

Having established the rich variety and coverage of linguistic expressions for deontic modality in LEXDEMOD, we benchmark the corpus on the two proposed tasks defined below:

(i) Agent-specific multi-label deontic modality classification. This task aims at predicting all the deontic types expressed in a sentence with respect to an agent. We pose this as a multi-label classification task conditioned on a sentence and an agent.
(ii) Agent-specific deontic modality and trigger span detection. This task aims at identifying both the deontic type and the corresponding triggers. We pose this as a token classification task: every token in the corpus is assigned a BIOS tag indicating whether it belongs to a modal trigger, appended with a suffix indicating the deontic type. For instance, Tenant$_{O}$ is$_{B-OBL}$ responsible$_{I-OBL}$ for$_{I-OBL}$ paying$_{O}$ the$_{O}$ rent$_{O}$, where subscripts denote the BIOS tags.

For both tasks, conditioning on the agent is done via special tokens added at the beginning of a sentence. This simple strategy has been used successfully for controlled text generation tasks (Sennrich et al., 2016; Johnson et al., 2017; Rudinger et al., 2020; Sancheti et al., 2022).

# 5 Benchmarking Setup

We experiment with various pre-trained language models (PLMs) (Devlin et al., 2019; Liu et al., 2019), which have shown state-of-the-art performance on natural language understanding tasks, to study their performance on our proposed tasks. We fine-tune these models for both tasks with a binary cross-entropy loss for 20 epochs each, with a batch size of 8 and a maximum sequence length of 256, using HuggingFace's Transformers library (Wolf et al., 2020). The model with the best macro-F1 score on the dev set is used to report results on the test set. Further implementation details are in A.6. We also partition the data according to the agent being conditioned on, to assess the performance of the trained models with respect to each agent.

# 6 Benchmarking Multi-label Classification

Comparison models. We experiment with three kinds of approaches for the agent-specific multi-label deontic modality classification task.

(1) Majority class predicted for each agent.
(2) Rule-based. We implement a rule-based approach similar to the one described in Ash et al. (2020), with additional conditioning on the agent.
It searches for the presence of pre-defined modal triggers for a deontic type and associates them with the agent using dependency tags (e.g., nsubj, aux, or agent). We use spaCy to tokenize each sentence and obtain a dependency parse. More details are in A.5.
(3) Fine-tuning PLMs. We fine-tune the following PLMs, differing in size and in the domain of the pre-training data: (i) BERT-base-uncased (BERT-BU); (ii) RoBERTa-base (RoBERTa-B); (iii) RoBERTa-large (RoBERTa-L); and (iv) the recently introduced Contract-BERT-base-uncased (C-BERT-BU) model (Chalkidis et al., 2020), which has been pre-trained on US contracts from the EDGAR library.

All the above models are trained assuming trigger span information is not available and the full context (i.e., sentence) is used. To understand the importance of Agent conditioning, Context, and Trigger for this task, we additionally train the following models: (i) No-agent, where the special token for the agent is not used during training; (ii) ACT-Masked, wherein everything in the context except the trigger span is masked using the [MASK] token to hide the context but retain the positional information of the trigger; (iii) AT, wherein only the tokens belonging to a trigger are used and multiple triggers are separated by a special token (e.g., [SEP] or </s>); and (iv) ACT, wherein all the triggers are appended at the end of the context, separated by a special token (e.g., [SEP] or </s>).

Evaluation measures. We report macro-averaged Precision, Recall, and F1 scores across all the types, calculated using the scikit-learn library (Pedregosa et al., 2011). We also report the Accuracy of predicting all the classes correctly for a sentence.

Results and analysis. We report the results for the multi-label classification task in Table 4.
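As a rough illustration of the rule-based baseline, the sketch below uses a toy trigger lexicon and a naive "agent appears before the trigger" check in place of the actual spaCy dependency-tag (nsubj/aux) association; names and patterns are illustrative only, not the rule set of A.5:

```python
import re

# Toy trigger lexicon; the real rule set (A.5) is larger and attaches
# triggers to agents via dependency tags rather than word order.
TRIGGERS = {
    "obl": [r"\bshall\b", r"\bagrees to\b"],
    "pro": [r"\bshall not\b", r"\bmay not\b"],
    "per": [r"\bmay\b"],
}

def rule_based_types(sentence, agent):
    """Predict deontic types expressed in `sentence` for `agent`:
    fire a type when one of its triggers occurs and the agent is
    mentioned before the trigger (a crude dependency stand-in)."""
    lowered = sentence.lower()
    predicted = set()
    for dtype, patterns in TRIGGERS.items():
        for pattern in patterns:
            match = re.search(pattern, lowered)
            if match and agent.lower() in lowered[:match.start()]:
                predicted.add(dtype)
    return predicted

preds = rule_based_types("Tenant may use the parking space.", "Tenant")
```

Note that a sentence like "Tenant shall not sublet" would fire both the 'shall' (obligation) and 'shall not' (prohibition) patterns here; resolving such overlaps, and doing real agent attachment, is exactly where hand-written rules fall short of the fine-tuned PLMs.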
| Model | Accuracy | Precision | Recall | F1 |
| --- | --- | --- | --- | --- |
| Majority | 39.53/28.66/34.38 | 6.49/5.23/11.72 | 14.29/14.29/21.09 | 8.92/7.66/15.03 |
| Rule-based | 61.32/50.54/56.22 | 81.81/75.21/80.04 | 46.66/45.54/46.64 | 50.13/46.16/48.76 |
| BERT-BU | 74.07/70.79/72.52 | 73.68/74.44/75.48 | 75.84/71.02/77.17 | 78.81/71.18/75.61 |
| RoBERTa-B | 75.53/71.42/73.59 | 73.54/72.17/74.48 | 78.39/72.88/78.31 | 74.90/71.91/75.66 |
| C-BERT-BU | 77.50/73.25/75.48 | 76.63/76.22/77.52 | 80.47/71.54/78.81 | 77.95/72.34/77.67 |
| RoBERTa-L | 78.28/75.03/76.74 | 75.05/77.69/77.30 | 79.59/75.21/79.11 | 76.71/76.00/77.88 |
| RoBERTa-L-No-agent | 51.28/47.45/49.46 | 57.09/53.79/58.32 | 65.01/55.52/60.08 | 55.75/52.56/57.53 |
| RoBERTa-L-ACT-Masked | 81.52/72.02/77.02 | 76.39/71.31/81.46 | 71.22/65.74/75.90 | 84.25/76.29/80.42 |
| RoBERTa-L-AT | 84.72/76.22/80.70 | 79.84/73.60/82.29 | 78.00/72.96/82.47 | 87.58/80.58/84.20 |
| RoBERTa-L-ACT | 91.66/88.62/90.23 | 88.44/85.40/89.48 | 88.10/84.43/89.21 | 93.38/91.38/92.42 |

Table 4: Evaluation results for the agent-specific multi-label deontic modality classification task. Scores for Tenant/Landlord/Both are averaged over 3 different seeds. BU, B, L, A, C, and T denote base-uncased, base, large, agent, context (sentence), and trigger, respectively. Development set results are provided in Table 9 in the Appendix.

While the rule-based approach has a better F1 score than majority-type prediction for each agent, Transformer-based models outperform these baselines, indicating their ability to better capture the linguistic diversity of deontic modal expressions. As expected, the rule-based approach has the highest overall precision but low recall, due to the impossibility of enumerating all the rules. While C-BERT-BU, which is pre-trained on contracts, performs better than BERT-BU and RoBERTa-B, interestingly it achieves an F1 score comparable to RoBERTa-L. This indicates that improvements from domain-specific pre-training may also be achieved with a larger model size and more training data.

As RoBERTa-L performs best on this task, we report the results for variants of this model, to understand the importance of agent conditioning, context, and trigger, in the last block of Table 4. The performance of RoBERTa-L-No-agent, trained without agent conditioning, drops significantly compared to RoBERTa-L, indicating the importance of agent conditioning during training and of associating the agent with the modality expressed in a sentence. Using trigger information during training (RoBERTa-L-ACT) significantly improves the performance over RoBERTa-L across all the measures, showing that triggers are indicative of specific deontic types. Higher scores for RoBERTa-L-AT than for RoBERTa-L-ACT-Masked show that the positional information of the trigger span adds noise to the representations learned by the model. Further, context is also important for identifying the deontic type, as all the metric scores drop when the context is masked (RoBERTa-L-ACT-Masked) or not used (RoBERTa-L-AT) during training, compared to using all the information (RoBERTa-L-ACT).

Manual inspection of deontic type-wise performance (Table 11) reveals that permission is the easiest, while no-entitlement and prohibition are the hardest to identify.
This can be due to the limited variety used in expressing permissions (mainly 'may'), while prohibitions are often expressed through negation within the context, which makes them harder to identify. For the tenant, obligation is identified more accurately than entitlement (and vice versa for the landlord), as expected given the higher frequency of obligations for tenants and of entitlements for landlords.

Figure 4a shows the trend of RoBERTa-L's F1 score as the train data size varies, indicating that the rate of increase in F1 decreases with additional data.

# 7 Benchmarking Trigger Span Detection

Comparison models. We experiment with three kinds of approaches for the agent-specific deontic modality and trigger span detection task.

(1) Majority. 'Shall' is the most used trigger, as shown in Figure 3, and is used to express obligations for Tenant and entitlements for Landlord. This baseline tags each occurrence of 'shall' with S-OBL for tenant or S-ENT for landlord as agent.
(2) Rule-based. We tag occurrences of pre-defined modal triggers in a sentence with the deontic type predicted by the rule-based approach (§6).
(3) Fine-tuning PLMs. We fine-tune the same models as described in §6 on a token classification task to predict the BIOS tags. Additionally, we train a 'No-agent' model to verify the importance of agent conditioning.

Evaluation measures. We report macro-averaged Precision, Recall, and F1 scores, calculated using the seqeval library (Nakayama, 2018). We also report the Accuracy of predicting the BIOS tags for a sentence. Following Pyatkin et al. (2021), we report these metrics in labeled (both deontic type and trigger span considered) and unlabeled (only trigger span, without deontic type, considered) settings.

Results and Analysis. Labeled and unlabeled metric scores for the trigger span detection task are reported in Table 5.

| Model | Acc. (Labeled) | P (Labeled) | R (Labeled) | F1 (Labeled) | Acc. (Unlabeled) | P (Unlabeled) | R (Unlabeled) | F1 (Unlabeled) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Majority | 97.16/96.85/97.01 | 5.58/4.40/9.98 | 10.89/10.31/16.00 | 7.38/6.17/12.27 | 97.28/97.04/97.17 | 41.30/39.59/40.51 | 50.61/42.86/46.76 | 45.48/41.16/43.41 |
| Rule-based | 97.85/97.67/97.76 | 77.42/76.68/79.66 | 32.42/33.24/33.61 | 40.00/39.30/40.58 | 97.89/97.73/97.81 | 72.59/73.97/73.22 | 40.07/35.34/37.73 | 51.64/47.83/49.80 |
| BERT-BU | 98.45/98.36/98.41 | 53.04/56.11/56.48 | 58.49/59.05/61.97 | 55.11/57.01/58.80 | 98.55/98.52/98.53 | 68.87/69.92/69.38 | 76.07/75.22/75.65 | 72.29/72.46/72.38 |
| RoBERTa-B | 98.40/98.24/98.32 | 53.03/52.43/55.57 | 63.65/59.63/64.00 | 57.08/55.31/53.91 | 98.49/98.41/98.46 | 68.99/67.44/68.22 | 78.18/75.36/76.78 | 73.27/71.14/72.22 |
| C-BERT-BU | 98.44/98.39/98.42 | 53.46/54.70/57.08 | 60.76/57.37/62.42 | 56.45/55.68/59.31 | 98.52/98.52/98.52 | 69.49/70.85/70.15 | 76.89/74.26/75.59 | 72.99/72.52/72.76 |
| RoBERTa-L | 98.45/98.27/98.37 | 54.99/55.58/57.37 | 65.55/58.88/63.74 | 59.19/56.71/60.04 | 98.54/98.39/98.47 | 69.78/69.56/69.68 | 79.16/74.35/76.78 | 74.18/71.88/73.06 |
| RoBERTa-L-NA | 97.64/97.75/97.69 | 32.45/36.71/36.36 | 48.92/43.93/46.79 | 34.68/38.72/39.45 | 98.26/98.22/98.24 | 61.73/64.63/63.12 | 75.21/72.84/74.03 | 67.79/68.46/68.11 |

Table 5: Evaluation results for the agent-specific modal trigger span detection task. Macro-averaged Precision, Recall, and F1 scores are presented for Tenant/Landlord/Both. Scores are averaged over 3 different seeds. BU, B, L, and NA denote base-uncased, base, large, and no-agent, respectively. Dev set results are shown in Table 10 in the Appendix.

RoBERTa-L has the best labeled F1 score, which evaluates both trigger detection and correct deontic type identification. Similar to the classification task, the rule-based approach outperforms the other models on precision; however, it lags behind in recall for the same reason. The size of the model (RoBERTa-L) matters more than the domain knowledge of C-BERT-BU. Consistently higher unlabeled scores indicate that the models are able to identify the trigger words. However, associating triggers with the correct deontic type is a harder task, owing to the multiple deontic types that a trigger can be used to express (e.g., shall in Table 3). Similar to the classification task, the importance of agent conditioning is evident from the last row, with a significant drop in F1 scores (even lower than the rule-based approach in the labeled setting). The high accuracy scores are due to the majority of tokens being labeled as 'O'. Trends with dataset size variation are shown in Figure 4b. Manual analysis of deontic type-wise span detection (Table 11) reveals that prohibition, no-entitlement, and no-obligation are hard to identify. Similar trends were observed for tenant and landlord as in §6.

These results show that identifying triggers and associating them with the deontic type is a difficult task, owing to the linguistic variety of expressions used in legal language.

# 8 Beyond Lease Contracts

To investigate whether the diverse linguistic expressions used for expressing deontic modality are specific to a contract type, we collect annotations via AMT using the same annotation protocol (§3.3) for: (1) 470 sentences from 3 employment contracts in the LEDGAR corpus, and (2) 154 sentences from 4 rental agreement templates freely available at PandaDoc.
$^{13}$ We evaluate the performance of the best model (RoBERTa-L) on these sentences for both tasks and report the results in Table 6.

![](images/03db8e6a9da5c3137daf9955838492450b8a45e883a1c3bebfe1ad2b558e5a09.jpg)
(a) Classification

![](images/3d61d40f4611126a9276904dd37a65118a7f25af733c30caae599e74c5ad49cb.jpg)
(b) Trigger Span Detection
Figure 4: RoBERTa-L's performance with varying train dataset size for the two tasks.

We observe a performance drop (more for employment contracts than for rental agreements) when compared to the model's performance on lease agreements, although it is significantly better than the rule-based approach, demonstrating the non-robustness of the rule-based approach to diverse linguistic expressions. This drop is more prominent for employment contracts due to the lease-specific agent conditioning (e.g., tenant) used during training, while the commonly occurring agents in employment contracts are employee, employer, etc.

To account for this, we additionally train models with anonymized agent mentions, in two ways: (i) RoBERTa-L-AR: all occurrences of an agent are replaced with the same token (e.g.,
'a1' for tenant), and (ii) RoBERTa-L-ARR: the agent is randomly replaced with a token that is consistent within a sentence. Replacing agent mentions leads to significant improvements for employment contracts on both tasks, although evaluating these models on rental agreements (see Table 6) and on the lease data (see Table 12) shows an expected drop in performance. These experiments show that the linguistic expressions captured by LEXDEMOD also generalize to other types of contracts.

| Model | Accuracy | Precision | Recall | F1 |
| --- | --- | --- | --- | --- |
| *Multi-label Classification (Rental/Employment)* | | | | |
| Majority | 36.36/27.45 | 11.87/8.80 | 19.10/15.15 | 14.46/11.11 |
| Rule-based | 41.56/47.45 | 53.77/64.63 | 34.54/35.00 | 33.27/37.22 |
| RoBERTa-L | 73.16/48.72 | 83.08/52.87 | 63.42/48.90 | 68.90/48.32 |
| RoBERTa-L-AR | 55.19/42.55 | 56.87/59.29 | 52.38/46.48 | 50.66/50.30 |
| RoBERTa-L-ARR | 70.35/64.68 | 76.79/70.05 | 63.14/64.62 | 65.89/65.36 |
| *Trigger Span Detection (Labeled) (Rental/Employment)* | | | | |
| Majority | 96.09/97.37 | 18.33/4.23 | 1.90/7.08 | 3.42/5.30 |
| Rule-based | 96.40/97.83 | 56.25/59.66 | 23.69/19.65 | 29.62/27.45 |
| RoBERTa-L | 97.48/97.78 | 49.74/36.80 | 45.87/37.84 | 45.58/34.87 |
| RoBERTa-L-AR | 97.22/98.15 | 49.97/48.86 | 44.43/42.99 | 44.22/43.42 |
| RoBERTa-L-ARR | 97.60/98.38 | 59.42/53.14 | 47.83/43.84 | 49.61/45.47 |

Table 6: Results for rental/employment contracts.$^{14}$

# 9 Case Study: Red Flag Detection

To investigate whether our agent-specific deontic modality classifier is capable of identifying the red flags annotated by Leivaditi et al. (2020) for lease agreements, we compare predictions on the dev set from ALeaseBERT, proposed by Leivaditi et al. (2020), and the RoBERTa-L model trained on the LEXDEMOD dataset. For each sentence in the red-flags dataset, we predict the deontic modality with respect to each agent alias mentioned in that sentence. If any deontic type is expressed for any of the agents, we consider the prediction positive, otherwise negative. We find (see Table 14 in A.8) that the model trained on LEXDEMOD has high recall and low precision, while ALeaseBERT has high precision but low recall for the positive class. Our model was able to predict all the red flags predicted by ALeaseBERT and some additional ones. This is expected, as many permissions or entitlements may belong to a deontic type without being red flags. We also found payment-related obligations that were predicted as red flags by our model but were not annotated as red flags in the dataset. Therefore, owing to its high recall, our model could also be used to filter important sentences that may indicate red flags.
# 10 Conclusion and Future Work

We introduced LEXDEMOD, a corpus for deontic modality detection in the legal domain that covers diverse linguistic expressions of deontic modality. We proposed and benchmarked two tasks, namely agent-specific multi-label deontic modality classification and agent-specific deontic modality and trigger span detection, using transformer-based models. While the evaluation results are promising, there is substantial room for improvement. We demonstrated the generalizability of the diverse linguistic expressions captured in LEXDEMOD via transfer-learning experiments on employment contracts and rental agreements. The small case study on red flag detection using our data showed the usability of our dataset. We leave joint modeling of the two tasks, and the use of these identification models for generating "at a glance" summaries of contracts, for future work.

# 11 Limitations

We note a few limitations: (1) Although we demonstrate reasonable generalization to employment agreements, our dataset is limited to lease agreements, which may not cover all the linguistic expressions of deontic modality occurring in the legal domain. (2) The custom interface built for collecting annotations does not support non-contiguous trigger-span selection, which may result in missing some contract-type-specific triggers (only triggers involving negation). Future work may consider handling non-contiguous spans and the challenges associated with them (e.g., representing non-contiguous trigger spans for a category in the BIO scheme). (3) As we focus on identifying agent-specific deontic modalities, we only consider sentences where the agent alias is explicitly mentioned. This helped simplify the annotation process and kept it cost-efficient. Therefore, our models may not work well when no agent alias is mentioned in the given sentence. We leave the collection of annotations for sentences not explicitly mentioning an agent alias for future work.
(4) Our data collection and modeling assume that the agent alias is known a priori (for which we perform agent alias extraction), as we focus on the identification task. Extending this work to any other type of agreement will require a similar alias extraction method (e.g., employee, employer for employment agreements) or a more sophisticated model to identify the agent implicitly.

# 12 Ethical Considerations

We are committed to ethical practices and to protecting the anonymity and privacy of the individuals who have contributed. We ensure that the privacy of the annotators is protected. For annotations, workers were paid at a rate of $7.5/hr per task.

Societal Impact. We recognize and acknowledge that our work carries a possibility of misuse, including malicious adulteration of summaries generated by extracting sentences identified by our model, and adversarial use of the model to mislead users. Such misuse is common to any prediction model; therefore, we strongly recommend coupling any such technology with external expert validation. The purpose of this work is to aid legal personnel or laypersons dealing with legal contracts in better understanding legal documents, not to replace any experts. As contracts are long documents, identifying sentences that express deontic types can help significantly reduce the number of sentences to read, or highlight the important parts of the contract which may need more attention.

# 13 Acknowledgements

We would like to thank Ani Nenkova and the anonymous reviewers for their useful feedback and comments on this work. We acknowledge the support from Adobe Research unrestricted gift funding for this work. The views contained in this article are those of the authors and not of the funding agency.

# References

Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preoţiuc-Pietro, and Vasileios Lampos. 2016.
Predicting judicial decisions of the European Court of Human Rights: A natural language processing perspective. PeerJ Computer Science, 2:e93.
Iosif Angelidis, Ilias Chalkidis, and Manolis Koubarakis. 2018. Named entity recognition, linking and generation for Greek legislation. In JURIX, pages 1-10.
Elliott Ash, Jeff Jacobs, Bentley MacLeod, Suresh Naidu, and Dominik Stammbach. 2020. Unsupervised extraction of workplace rights and duties from collective bargaining agreements. In 2020 International Conference on Data Mining Workshops (ICDMW), pages 766-774. IEEE.
Tara Athan, Harold Boley, Guido Governatori, Monica Palmirani, Adrian Paschke, and Adam Wyner. 2013. OASIS LegalRuleML. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Law, pages 3-12.

Luciana Beatrix Avila, Amália Mendes, and Iris Hendrickx. 2015. Towards a unified approach to modality annotation in Portuguese. In Proceedings of the Workshop on Models for Modality Annotation.
Miguel Ballesteros, Rishita Anubhai, Shuai Wang, Nima Pourdamghani, Yogarshi Vyas, Jie Ma, Parminder Bhatia, Kathleen McKeown, and Yaser Al-Onaizan. 2020. Severing the edge between before and after: Neural architectures for temporal ordering of events. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5412-5417, Online. Association for Computational Linguistics.
Rachelle Ballesteros-Lintao, Maria Regina P Arriero, Judith Ma Angelica S Claustro, Kristina Isabelle U Dichoso, Selenne Anne S Leynes, Maria Rosario R Aranda, and Jean Reintegrado-Celino. 2016. Deontic meanings in Philippine contracts. International Journal of Legal Discourse, 1(2):421-454.
Paheli Bhattacharya, Kaustubh Hiware, Subham Rajgaria, Nilay Pochhi, Kripabandhu Ghosh, and Saptarshi Ghosh. 2019. A comparative study of summarization algorithms applied to legal case judgments. In European Conference on Information Retrieval, pages 413-428. Springer.
Michael J Bommarito II, Daniel Martin Katz, and Eric M Detterman. 2021. LexNLP: Natural language processing and information extraction for legal and regulatory texts. In Research Handbook on Big Data Law. Edward Elgar Publishing.
David Bracewell, David Hinote, and Sean Monahan. 2014. The author perspective model for classifying deontic modality in events. In The Twenty-Seventh International FLAIRS Conference.
Cristian Cardellino, Milagro Teruel, Laura Alonso Alemany, and Serena Villata. 2017. Legal NERC with ontologies, Wikipedia and curriculum learning. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 254-259, Valencia, Spain. Association for Computational Linguistics.
Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. 2019. Neural legal judgment prediction in English. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4317-4323, Florence, Italy. Association for Computational Linguistics.
Ilias Chalkidis, Ion Androutsopoulos, and Achilleas Michos. 2017. Extracting contract elements. In Proceedings of the 16th edition of the International Conference on Artificial Intelligence and Law, pages 19-28.
Ilias Chalkidis, Ion Androutsopoulos, and Achilleas Michos. 2018. Obligation and prohibition extraction using hierarchical RNNs. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 254-259, Melbourne, Australia. Association for Computational Linguistics.
Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: The muppets straight out of law school. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2898-2904, Online. Association for Computational Linguistics.
Huajie Chen, Deng Cai, Wei Dai, Zehui Dai, and Yadong Ding. 2019.
Charge-based prison term prediction with deep gating network. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6362-6367, Hong Kong, China. Association for Computational Linguistics.
Sandra Chung. 1985. Tense, aspect and mood. Language typology and syntactic description, pages 202-258.
G Marcus Cole. 2015. Rational consumer ignorance: When and why consumers should agree to form contracts without even reading them. JL Econ. & Pol'y, 11:413.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Mauro Dragoni, Serena Villata, Williams Rizzi, and Guido Governatori. 2016. Combining NLP approaches for rule extraction from legal documents. In 1st Workshop on MIning and REasoning with Legal texts (MIREL 2016).
Xingyi Duan, Baoxin Wang, Ziyue Wang, Wentao Ma, Yiming Cui, Dayong Wu, Shijin Wang, Ting Liu, Tianxiang Huo, Zhen Hu, et al. 2019. CJRC: A reliable human-annotated benchmark dataset for Chinese judicial reading comprehension. In China National Conference on Chinese Computational Linguistics, pages 439-451. Springer.
Ruka Funaki, Yusuke Nagata, Kohei Suenaga, and Shinsuke Mori. 2020. A contract corpus for recognizing rights and obligations. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2045-2053, Marseille, France. European Language Resources Association.
Ben Hachey and Claire Grover. 2006. Extractive summarisation of legal texts. Artificial Intelligence and Law, 14(4):305-345.

Iris Hendrickx, Amália Mendes, and Silvia Mencarelli.
2012. Modality in text: a proposal for corpus annotation. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 1805-1812, Istanbul, Turkey. European Language Resources Association (ELRA).
Dan Hendrycks, Collin Burns, Anya Chen, and Spencer Ball. 2021. CUAD: An expert-annotated NLP dataset for legal contract review. arXiv preprint arXiv:2103.06268.
Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrial-strength natural language processing in Python.
Otto Jespersen. 2013. The philosophy of grammar. Routledge.
Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.
Klaus Krippendorff. 2018. Content analysis: An introduction to its methodology. Sage Publications.
Spyretta Leivaditi, Julien Rossi, and Evangelos Kanoulas. 2020. A benchmark for lease contract review. arXiv preprint arXiv:2010.10386.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Bingfeng Luo, Yansong Feng, Jianbo Xu, Xiang Zhang, and Dongyan Zhao. 2017. Learning to predict charges for criminal cases with legal basis. arXiv preprint arXiv:1707.09168.
Laura Manor and Junyi Jessy Li. 2019. Plain English summarization of contracts. In Proceedings of the Natural Legal Language Processing Workshop 2019, pages 1-11, Minneapolis, Minnesota. Association for Computational Linguistics.
Aleksandra Matulewska. 2010. Deontic modality and modals in the language of contracts.
Hiroki Nakayama. 2018. seqeval: A Python framework for sequence labeling evaluation.
Software available from https://github.com/chakki-works/seqeval.

Adeline Nazarenko, François Levy, and Adam Wyner. 2018. An annotation language for semantic search of legal sources. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).

James O'Neill, Paul Buitelaar, Cecile Robin, and Leona O'Brien. 2017. Classifying sentential modality in legal language: a use case in financial regulations, acts and directives. In Proceedings of the 16th edition of the International Conference on Artificial Intelligence and Law, pages 159-168.

Jonathan A Obar and Anne Oeldorf-Hirsch. 2020. The biggest lie on the internet: Ignoring the privacy policies and terms of service policies of social networking services. Information, Communication & Society, 23(1):128-147.

Frank Robert Palmer. 2001. Mood and modality. Cambridge university press.

Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. Journal of machine learning research, 12(Oct):2825-2830.

Wim Peters and Adam Wyner. 2016. Legal text interpretation: Identifying hohfeldian relations from text. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 379-384, Portorož, Slovenia. European Language Resources Association (ELRA).

Valentina Pyatkin, Shoval Sadde, Aynat Rubinstein, Paul Portner, and Reut Tsarfaty. 2021. The possible, the plausible, and the desirable: Event-based modality detection for language processing. arXiv preprint arXiv:2106.08037.

Paulo Quaresma, Amália Mendes, Iris Hendrickx, and Teresa Gonçalves. 2014. Automatic tagging of modality: identifying triggers and modal values.
In Proceedings 10th Joint ISO-ACL SIGSEM Workshop on Interoperable Semantic Annotation, pages 95–101. European Language Resources Association. +Aynat Rubinstein, Hillary Harner, Elizabeth Krawczyk, Dan Simonson, Graham Katz, and Paul Portner. 2013. Toward fine-grained annotation of modality in text. In Proceedings of the IWCS 2013 workshop on annotation of modal meanings in natural language (WAMM), pages 38-46. +Rachel Rudinger, Vered Shwartz, Jena D. Hwang, Chandra Bhagavatula, Maxwell Forbes, Ronan Le Bras, Noah A. Smith, and Yejin Choi. 2020. Thinking like a skeptic: Defeasible inference in natural language. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4661-4675, Online. Association for Computational Linguistics. +Abhilasha Sancheti, Balaji Vasan Srinivasan, and Rachel Rudinger. 2022. Entailment relation aware paraphrase generation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10). +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of the + +2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 35-40. +Don Tuggener, Pius von Däniken, Thomas Peetz, and Mark Cieliebak. 2020. LEDGAR: A large-scale multi-label corpus for text classification of legal provisions in contracts. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 1235-1241, Marseille, France. European Language Resources Association. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. +Georg Henrik Von Wright. 1951. Deontic logic. Mind, 60(237):1-15. 
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Adam Wyner and Wim Peters. 2011. On rule extraction from regulations. In *Legal Knowledge and Information Systems*, pages 113-122. IOS Press.

Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Chaojun Xiao, Zhiyuan Liu, and Maosong Sun. 2018. Legal judgment prediction via topological learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3540-3549.

Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020. JEC-QA: A legal-domain question answering dataset. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9701-9708.

# A Appendix

# A.1 Combining Bullets

Owing to the hierarchical nature of contracts, we combine the higher-level context (bullet, the "parent") with the lower-level context (sub-bullet, the "child") by iterating over the provisions in a contract in sequential order and applying the rules below. Combination can be done in two ways: (i) concatenating and (ii) merging. We find a sub-bullet via pattern matching with the regular expression `(^\([ivx]+|^\([a-zA-Z]+|^[\d.\d]+)`.
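A toy sketch of this sub-bullet detection. The pattern below is a hypothetical normalization (closing parentheses added to the roman-numeral and alphabetic alternatives, which are partially garbled in the extracted text), not necessarily the authors' exact expression:

```python
import re

# Hypothetical normalization of the sub-bullet pattern: a provision line
# starting with "(iv)", "(a)", or a dotted number like "1.2" is a sub-bullet.
SUB_BULLET = re.compile(r"^(\([ivx]+\)|\([a-zA-Z]+\)|\d+\.\d+)")

def is_sub_bullet(line: str) -> bool:
    """Return True if the provision line looks like a sub-bullet marker."""
    return SUB_BULLET.match(line.strip()) is not None

print(is_sub_bullet("(ii) pay rent on time"))   # True
print(is_sub_bullet("1.2 Maintenance duties"))  # True
print(is_sub_bullet("Tenant shall pay rent."))  # False
```

A line flagged this way would then be merged into or concatenated with its parent bullet according to the rules below.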
- If the child is not a complete sentence (identified by the presence of an S node at the root of its constituency parse), the parent is a complete sentence, and the parent does not contain 'follow' or 'below': remove ':' from the end of the parent and append the child (we call this merging).
- If the child starts with a lower-case letter and the parent does not contain 'follow' or 'below': remove ':' and append the child irrespective of the root label of the constituency parse.
- If the parent ends with 'the following:': remove 'the following:' and append the child if it is not a complete sentence; otherwise, keep 'the following:' and just append the child (we call this concatenating).
- If none of the above rules applies and the parent ends with a ':': just concatenate the child with the parent.

# A.2 Annotation Guidelines

We present the instructions, and the correctly and incorrectly annotated examples with explanations, provided to the annotators in Figure 5. The custom annotation interface built to collect the data is shown in Figure 6. We manually annotate 50 sentences and use them as quality-check questions to ensure annotators are carefully and correctly annotating each HIT. Type-wise inter-annotator agreement for the sentences in the test split is shown in Table 7.

# A.3 Qualification Questions

We ask 10 multiple-choice questions in the pre-qualification task: 5 questions test the understanding of identifying the correct deontic type, and 5 questions test the understanding of trigger-span selection for a deontic type.
| Obl | Ent | Pro | Per | Nobl | Nent | None |
|------|------|------|------|------|------|------|
| 0.82 | 0.68 | 0.44 | 0.82 | 0.76 | 0.41 | 0.65 |
+ +Table 7: Deontic type-wise inter-annotator agreement $(\alpha)$ for the test set. + +
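The agreement values in Table 7 are Krippendorff's $\alpha$. A self-contained sketch of nominal $\alpha$ (illustrative; the paper presumably relied on an existing implementation) via a coincidence matrix:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Nominal Krippendorff's alpha. `units` is a list of tuples, one tuple
    of annotator labels per item (each with >= 2 labels)."""
    o = Counter()  # coincidence matrix o[(c, k)]
    for labels in units:
        m = len(labels)
        for c, k in permutations(labels, 2):
            o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())
    d_o = sum(v for (c, k), v in o.items() if c != k)   # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - d_o / d_e

# Perfect agreement between two annotators yields alpha = 1.0.
print(krippendorff_alpha_nominal([("Obl", "Obl"), ("Per", "Per"), ("Pro", "Pro")]))  # 1.0
```

With one disagreeing unit, e.g. `[("a","a"), ("a","b"), ("b","b"), ("a","a")]`, the same formula gives $\alpha = 8/15 \approx 0.53$.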
| Type | Heuristic triggers |
|------|--------------------|
| Obl | shall/will be required, shall be obligated, shall, must, will, have to, should, ought to have, will/shall be paid |
| Ent | shall/will be entitled, shall/will be paid, shall/will retain, shall/will receive, shall have the right to, shall be retained, shall be kept, shall be claimed, shall be accessible, shall be owned, shall be determined |
| Pro | shall/will/must/may not, cannot, shall have no right, can not, shall/will not be allowed, shall not assist, shall/will be prohibited |
| Per | shall be permitted, shall also be permitted, can, may, could, shall/will be allowed |
| Nobl | shall/will not be liable for, shall/will not be obligated to, shall/will not be obligated for, shall/will not be responsible for, shall/will not be required to |
| Nent | shall/will not entitled to, shall/will not have the right to, shall/will not be entitled for |
Table 8: Triggers used to identify the deontic types.

# A.4 Resolving Disagreements

Disagreement in the annotation for duplicate sentences is resolved by one of the authors. The disagreement could occur because of a missing modality when multiple modalities are expressed in a sentence, incorrect interpretation of the sentence, or human error in annotating with respect to a tenant or a landlord. Consider the sentence "[landlord] After final approval of the Final Plans by applicable governmental authorities, no further changes may be made thereto without the prior written approval of both Landlord and Tenant." It was annotated as 'prohibition' for the landlord by one annotator and 'none' by another. As the prohibition mentioned in the sentence is not for the landlord, the correct annotation is 'none'. Therefore, we retain the correct annotation for the example and discard the sentence with the incorrect annotation. Another example is "[landlord] All conditions and agreements under the Lease to be satisfied or performed by Landlord have been satisfied and performed.", which was incorrectly annotated as an 'obligation'.

# A.5 Rule-based Approach

We first curate a pre-defined list of triggers (Table 8) used to express deontic types in the legal domain, following Ash et al. (2020). Then, tokenize and obtain
| Model | Accuracy | Precision | Recall | F1 |
|-------|----------|-----------|--------|----|
| Majority | 47.87/26.06/38.48 | 7.60/4.83/12.43 | 14.29/14.29/20.29 | 9.92/7.22/15.26 |
| Rule-based | 52.13/42.25/47.88 | 63.81/65.28/75.54 | 48.08/32.69/40.07 | 47.42/34.63/42.75 |
| BERT-BU | 75.71/67.61/72.22 | 73.80/68.20/72.50 | 76.14/61.52/72.56 | 74.16/62.42/72.13 |
| RoBERTa-B | 72.69/69.48/71.31 | 73.94/74.61/73.71 | 76.06/74.94/76.50 | 73.65/73.86/74.36 |
| RoBERTa-L | 77.13/67.37/72.93 | 76.93/68.95/74.29 | 78.01/69.14/75.73 | 76.79/67.76/74.32 |
| C-BERT-BU | 76.60/68.31/73.03 | 78.17/69.27/75.52 | 81.19/68.62/77.43 | 79.14/67.49/75.92 |
+ +Table 9: Evaluation results for agent-specific multi-label deontic modality classification task on development set. Scores are averaged over 3 different seeds. BU, B, and L denote base-uncased, base, and large respectively. + +
| Model | Labeled Acc. | Labeled P | Labeled R | Labeled F1 | Unlabeled Acc. | Unlabeled P | Unlabeled R | Unlabeled F1 |
|-------|--------------|-----------|-----------|------------|----------------|-------------|-------------|--------------|
| Majority | 97.26/96.93/97.11 | 7.23/4.73/11.96 | 10.43/10.42/15.52 | 8.54/6.50/13.36 | 97.41/97.20/97.32 | 52.91/46.81/50.30 | 51.81/44.00/48.40 | 52.36/45.36/49.33 |
| Rule-based | 97.72/97.57/97.65 | 58.27/58.99/72.43 | 35.61/17.53/25.83 | 40.13/25.52/34.36 | 97.76/97.62/97.70 | 80.22/66.67/75.35 | 37.82/22.67/31.20 | 51.41/33.83/44.12 |
| BERT-BU | 98.23/98.03/98.14 | 57.07/45.78/54.75 | 57.81/45.74/55.23 | 56.91/44.18/54.14 | 98.41/98.29/98.36 | 72.70/67.68/70.47 | 72.37/67.33/70.17 | 72.49/67.50/70.30 |
| C-BERT-BU | 98.18/97.95/98.08 | 54.09/60.30/56.96 | 55.31/53.18/57.01 | 54.12/52.86/55.73 | 98.35/98.20/98.29 | 69.58/68.08/68.90 | 70.99/68.67/69.97 | 70.26/68.37/69.43 |
| RoBERTa-B | 98.12/97.83/97.99 | 55.33/48.38/53.63 | 58.95/52.51/57.78 | 56.73/49.80/55.31 | 98.28/98.06/98.18 | 70.30/67.56/69.08 | 73.40/71.78/72.69 | 71.80/69.56/70.81 |
| RoBERTa-L | 98.09/97.83/97.97 | 56.81/47.91/54.88 | 60.63/51.78/59.10 | 57.69/49.29/56.33 | 98.23/98.05/98.15 | 70.67/66.11/68.64 | 73.58/69.56/71.82 | 72.09/67.78/70.19 |
| RoBERTa-L-NA | 97.57/97.76/97.65 | 35.59/44.35/40.60 | 46.54/48.02/48.67 | 36.81/45.7/43.16 | 98.27/98.19/98.23 | 69.75/69.00/69.43 | 74.61/72.89/73.86 | 72.08/70.86/71.55 |
+ +Table 10: Evaluation results for agent-specific modal trigger span detection task on development set. Macro-averaged scores for Tenant/Landlord/All are presented for precision, recall and F1 measures. Scores are averaged over 3 different seeds. BU, B, and L denote base-uncased, base, and large respectively. + +
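The "Labeled" columns in Table 10 require both the span and its deontic type to match the gold annotation, while "Unlabeled" credits a span match regardless of type. A toy sketch of this distinction, assuming exact-span matching over (start, end, type) triples:

```python
def span_prf(gold, pred, labeled=True):
    """gold/pred: sets of (start, end, type) triples. With labeled=False the
    deontic type is ignored and only the span boundaries are compared."""
    if not labeled:
        gold = {(s, e) for s, e, _ in gold}
        pred = {(s, e) for s, e, _ in pred}
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {(0, 2, "Obl"), (5, 7, "Per")}
pred = {(0, 2, "Ent"), (5, 7, "Per")}   # one span right, wrong type
print(span_prf(gold, pred, labeled=True))   # (0.5, 0.5, 0.5)
print(span_prf(gold, pred, labeled=False))  # (1.0, 1.0, 1.0)
```

The mislabeled span costs a full point under the labeled metric but none under the unlabeled one, which is why the unlabeled scores in Table 10 are uniformly higher.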
| Deontic Type | Classification P | Classification R | Classification F1 | Span Detection P | Span Detection R | Span Detection F1 |
|--------------|------------------|------------------|-------------------|------------------|------------------|-------------------|
| Obl | 84.87 | 87.83 | 86.32 | 76.93 | 80.80 | 78.82 |
| Ent | 79.20 | 85.65 | 82.30 | 66.96 | 77.89 | 72.02 |
| Pro | 60.76 | 75.00 | 67.13 | 48.94 | 68.66 | 57.14 |
| Per | 91.25 | 87.43 | 89.30 | 90.79 | 82.63 | 86.52 |
| Nobl | 74.58 | 87.13 | 80.37 | 31.88 | 38.94 | 35.06 |
| Nent | 67.95 | 60.23 | 63.86 | 29.73 | 32.67 | 31.13 |
| None | 82.60 | 73.10 | 77.56 | - | - | - |
the dependency parse and part-of-speech (POS) tags for each token in a sentence using the spaCy Python library. Algorithm 1 describes the heuristic (derived by observing patterns in the train set) that searches for the presence of pre-defined triggers in a given sentence and extracts each trigger's position (start index), the mentioned agents, and its dependency tag.

# A.6 Implementation Details

We run each model with 3 seed values. We use the Adam optimizer with a linear learning-rate scheduler, an initial learning rate of $2e^{-5}$, and a warm-up ratio of 0.05. All models are trained and tested on an NVIDIA Tesla V100 SXM2 16GB GPU machine. We experiment with batch size $\in \{2,4,8\}$, number of epochs $\in \{3,5,10,20,30\}$, learning rate $\in \{1e^{-5},2e^{-5},3e^{-5},5e^{-5}\}$, and warm-up ratio $\in \{0.05,0.10\}$. BERT-base (110M parameters) and RoBERTa-base (125M parameters)

Table 11: Deontic type-wise results for agent-specific multi-label classification and modal trigger span detection (labeled) tasks on the test set from the best (out of 3 seeds) RoBERTa-L model.
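The learning-rate schedule described in A.6 (linear warm-up over the first 5% of steps, then linear decay) can be sketched without any framework dependency; the exact endpoint behaviour is an assumption:

```python
def linear_schedule_lr(step, total_steps, base_lr=2e-5, warmup_ratio=0.05):
    """Linear warm-up from 0 to base_lr, then linear decay back to 0.
    A sketch of the schedule described in A.6, not the exact training code."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 1000
print(linear_schedule_lr(0, total))     # 0.0
print(linear_schedule_lr(50, total))    # 2e-05 (peak, at the end of warm-up)
print(linear_schedule_lr(1000, total))  # 0.0
```

This matches what `transformers`-style linear schedulers compute per optimizer step.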
| Model | Accuracy | Precision | Recall | F1 |
|-------|----------|-----------|--------|----|
| *Multi-label Classification* | | | | |
| RoBERTa-L | 76.74 | 77.30 | 79.11 | 77.88 |
| RoBERTa-L-AR | 47.91 | 56.99 | 59.91 | 56.60 |
| RoBERTa-L-ARR | 66.18 | 69.75 | 74.14 | 71.22 |
| *Trigger Span Detection (Labeled)* | | | | |
| RoBERTa-L | 98.37 | 57.37 | 63.74 | 60.04 |
| RoBERTa-L-AR | 98.12 | 50.11 | 59.54 | 53.63 |
| RoBERTa-L-ARR | 98.42 | 58.42 | 64.88 | 61.19 |
| *Trigger Span Detection (Unlabeled)* | | | | |
| RoBERTa-L | 98.47 | 69.68 | 76.78 | 73.06 |
| RoBERTa-L-AR | 98.36 | 67.03 | 75.76 | 71.07 |
| RoBERTa-L-ARR | 98.54 | 70.65 | 76.76 | 73.58 |
+ +Table 12: Evaluating RoBERTa-L-AR and RoBERTa-L-ARR on lease test set. + +
| Model | Accuracy | Precision | Recall | F1 |
|-------|----------|-----------|--------|----|
| Majority | 96.09/97.53 | 54.55/41.26 | 4.55/34.16 | 8.39/37.38 |
| Rule-based | 96.40/97.85 | 84.62/73.28 | 16.67/21.72 | 27.85/33.51 |
| RoBERTa-L | 97.78/98.09 | 69.85/54.32 | 68.44/60.76 | 69.14/57.36 |
| RoBERTa-L-AR | 97.81/98.31 | 68.77/59.64 | 69.44/57.05 | 69.09/58.30 |
| RoBERTa-L-ARR | 97.78/98.46 | 69.51/61.81 | 70.45/57.50 | 69.94/59.58 |
Table 13: Unlabeled metric scores for the trigger span detection task on rental/employment contracts.

models took 46 minutes, and RoBERTa-large (355M parameters) took 2 hours to train for each of the tasks.

# A.7 Additional Results

Tables 9 and 10 show the results for the two tasks on the dev set. Table 11 shows the type-wise results from the best-performing model. Table 12 shows the performance of models trained with anonymized agents on the test set of lease contracts.

Algorithm 1 Rule-based Heuristic
1: Inputs: List $T$ of pre-defined triggers, List $A$ of aliases for the type of contract to process.
2: Outputs: List $L$ of tuples containing (deontic type, trigger, agent, start index) for all the sentences in the contract.
3: $L \gets []$ // Initialization
4: for each sentence in contract do
5:   // Initialize a list to keep track of visited trigger indices
6:   visited $\leftarrow []$
7:   for each $t$ in $T$ do
8:     if $t$ in sentence then
9:       // Initialize a list of trigger indices
10:      indices $\leftarrow []$
11:      for each occurrence of $t$ in sentence do
12:        if start index of $t \notin$ visited then
13:          indices $\leftarrow$ start index
14:          visited $\leftarrow$ start index
15:        end if
16:      end for
17:      for word in sentence do
18:        if word.dep is ROOT or word.pos $\in$ [VERB, AUX] then
19:          for child in word.children do // Iterate over the children of word in the dependency tree
20:            If $a1 \in A$ is 'nsubj'/'nsubjpass' of word & child == t[0] & child.dep is 'aux' & child.index in indices then $L \leftarrow (Type(t), t, a1, child.index)$ // Rule 1
21:            If Rule 1 & $a2 \in A$ is a 'conj' of $a1$ then $L \leftarrow (Type(t), t, a2, child.index)$ // Rule 2
22:            If child1.dep is 'agent' & child2 == t[0] & child2.dep is 'aux' & $a1 \in A$ in children(child1) = child3 then $L \leftarrow (Type(t), t, a1, child2.index)$ // Rule 3
23:            If Rule 3 & $a2 \in A$ in conjunction of child3 & VERB in conjunction of word & t1 is 'aux' of VERB then $L \leftarrow (Type(t1), t1, a2, t1.index)$ // Rule 4
24:            If Rule 3 & not Rule 4 & $a2 \in A$ in conjunction of child3 & child == t[0] & child.dep is 'aux' & child.index in indices then $L \leftarrow (Type(t), t, a2, child.index)$ // Rule 5
25:            If child.dep in ['pobj', 'dobj'] & $a1 \in A$ in conjunction of children(child) = child1 & VERB in conjunction of word & t1 is 'aux' of VERB then $L \leftarrow (Type(t1), t1, a1, t1.index)$ // Rule 6
26:            If child == t[0] & child.dep is 'aux' & child.index in indices & VERB in conjunction of word & t1 is 'aux' of VERB & 'agent' in children(conjunction VERB) = child1.dep & $a2 \in A$ in children(child1) then $L \leftarrow (Type(t), t, a2, child.index)$ // Rule 7
27:            If child == t[0] & child.dep is 'aux' & child.index in indices & VERB in conjunction of word & t1 is 'aux' of VERB & not Rule 7 then $L \leftarrow (Type(t1), t1, Agent(t), t1.index)$ // Rule 8
28:          end for
29:        end if
30:      end for
31:    end if
32:  end for
33: end for

Table 13 shows the unlabeled metric scores for generalizability to rental and employment contracts.
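A drastically simplified, pure-Python sketch of the trigger search in Algorithm 1 (the real implementation walks spaCy dependency parses; this toy version only does surface matching and is purely illustrative):

```python
TRIGGERS = {  # a small subset of the trigger list in Table 8
    "shall not": "Pro",
    "shall be entitled": "Ent",
    "shall": "Obl",
    "may": "Per",
}

def find_triggers(sentence, agents=("Tenant", "Landlord")):
    """Return (type, trigger, agent, start_index) tuples for surface matches.
    Longer triggers are tried first so 'shall not' wins over 'shall'."""
    lowered = sentence.lower()
    agent = next((a for a in agents if a.lower() in lowered), None)
    for trig in sorted(TRIGGERS, key=len, reverse=True):
        idx = lowered.find(trig)
        if idx != -1:
            # Keep only the first/longest match in this sketch; Algorithm 1
            # instead checks dependency relations (nsubj, aux, conj, ...).
            return [(TRIGGERS[trig], trig, agent, idx)]
    return []

print(find_triggers("Tenant shall not sublet the Premises."))
# [('Pro', 'shall not', 'Tenant', 7)]
```

Rules 1-8 of Algorithm 1 refine exactly this kind of match by requiring the agent to stand in a specific dependency relation (subject, agent of a passive, or a conjunct) to the verb that the trigger modifies.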
| Model | Precision | Recall | F1 |
|-------|-----------|--------|----|
| ALeaseBERT | 82.35 | 8.09 | 14.74 |
| Ours | 8.53 | 87.28 | 15.54 |
Table 14: Results from the red flag detection case study. Ours denotes the RoBERTa-L model trained on LEXDEMOD; ALeaseBERT is the baseline from the red flags dataset (Leivaditi et al., 2020).

# A.8 Case Study: Red Flag Detection

Evaluation scores for the red flag detection case study are presented in Table 14.

# A.9 Annotated Examples for Deontic Types

Sample annotations are provided in Table 15.

# Instructions

Welcome! In this project, you will read sentences from lease agreements. This qualification HIT will test your ability to understand legal text and identify obligations, permissions, prohibitions, and entitlements of an entity (e.g., landlord, tenant) in lease agreements. These instructions will help you understand the task better. The task involves two types of annotations:

1. Selecting the category/categories expressed in a given sentence with respect to a given entity.
2. Highlighting the word/words (AKA the "triggering span") that evoke the selected category.

# Categories to select:

Obligation: The entity is required to have/do something.
Entitlement: The entity has the right to have/do something.
Permission: The entity is allowed to have/do something.
Prohibition: The entity is forbidden or not allowed to have/do something.
No Obligation: The entity is not required to have/do something, or is allowed to not have/do something.
No Entitlement: The entity has no right to have/do something.

# What is a trigger?

A trigger is a word/words that evoke the expressed category in a sentence. Please DO NOT include the action (e.g., what the obligation/permission/prohibition/entitlement is) in the triggering span.

Please refer to the examples below. Triggering spans are boldfaced in the examples.

Obligation (with respect to Tenant)

Good Annotation

1. Tenant shall/will pay the rent to the Landlord.
2. Tenant is responsible for paying rent timely to the Landlord.
3. Tenant agrees to take over the lease of this premises from the effective date.
4.
Tenant hereby acknowledges that it is familiar with the condition of the premises and accepts the Premises in its "as is" condition with all faults.

NOTE: Please highlight all the occurrences of a category (one at a time) as in e.g. 4.

Bad Annotation

1. Landlord shall pay to the Tenant for the maintenance of the premises.
Explanation: This is an obligation with respect to Landlord and not the Tenant. (wrong category)

2. The provisions of this Section shall survive the expiration or earlier termination of this Lease.
Explanation: This is a rule and not an obligation for the Tenant. No category is expressed with respect to Tenant. (wrong category)

3. Landlord and Tenant agree as follows: The extension options are conditioned upon each Guarantor.
Explanation: 'as follows' is an extra phrase and should not be included in the triggering span. (wrong span)

Entitlement (with respect to Landlord)

Bad Annotation

1. Tenant shall pay the rent to the Landlord.
Explanation: The triggering span should not include the action ('pay'). (wrong span)

2. Landlord will have the right to sell the property at any time.
Explanation: The triggering span should not include the action ('sell the property'). (wrong span)

3. Landlord may install solar panels on the roof of the premises which will generate electricity.
Explanation: This is an example of the 'permission' category and not 'entitlement'. (wrong category)

Permission (with respect to Tenant)

Good Annotation

1. Tenant may/can park a vehicle in the parking space of the premises.
2. Tenant is allowed to park a vehicle in the parking space of the premises.
3. Tenant may enter the premises at any time and make any repairs as may be required or permitted pursuant to this Lease and for any other business purpose.

Bad Annotation

1.
The following are conditions precedent to a Transfer or to Landlord considering a request by Tenant to a Transfer.
Explanation: Tenant is not permitted to do something in this sentence, so no category is expressed. (wrong category)

2. Tenant may enter the premises at any time and make any repairs as may be required or permitted pursuant to this Lease and for any other business purpose.
Explanation: The second occurrence of "may" is not expressing any permission for the tenant, so it should not be highlighted. (wrong span)

Prohibition (with respect to Tenant)

Good Annotation

1. Tenant shall not, in any case, cause damage to the leased property.
2. Tenant's prior written consent which shall not be unreasonably withheld or delayed.
3. Noise or vibrations from Landlord's material shall not be considered objectionable by Tenant.

Good Annotation (in case of negation words)

1. No agreement with any condemning authority in settlement of or under threat of any condemnation or other eminent domain proceedings shall be made by either Landlord or Tenant without the written consent of the other.
NOTE: Please highlight the whole negation part as the trigger in such cases, as non-contiguous highlighting is not allowed.

Bad Annotation

1. Tenant acknowledges that no tenant improvements, replacements, or upgrades to the Premises are provided for, or shall be made to the Premises by Landlord.
Explanation: Tenant is not prohibited from doing something here. This is an example of obligation where Tenant is agreeing to the terms that no improvements will be provided. Hence the correct category is 'obligation' and the triggering span is 'acknowledges'. (wrong category, wrong span)

No Obligation (with respect to Tenant)

Good Annotation

1. Tenant is not obliged to pay for the maintenance before the start of the lease.
2.
Tenant shall not be required to provide Landlord with five days prior notice of emergency alterations.
3. Tenant makes no representation or warranty of any kind with respect to the Premises.

Good Annotation (in case of negation words)

1. In no event shall Tenant have an obligation for any defects in the Premises or any limitation on its use.

Bad Annotation

1. In no event shall Tenant have an obligation for any defects in the Premises or any limitation on its use.
Explanation: In case of negation words, please include the words starting from the negation word in the triggering span. So the correct triggering span here is 'In no event shall Tenant have an obligation'. (wrong span)

No Entitlement (with respect to Tenant)

Good Annotation (in case of negation words)

1. In no event, however, shall Tenant have a right to terminate the Lease.

No Category Expressed (with respect to Landlord)

Good Annotation

1. The cost of such additions or modifications made by Landlord shall be included in Operating Expenses pursuant to Paragraph 6 of this Lease.
2. The furnishing of insurance required hereunder shall not be deemed to limit Tenant's obligations under this section.
3. The prior written consent of Landlord to such sublease shall not be required, provided that the sublease shall remain subordinate to this Lease.
4. The obligation of Tenant to pay Base Rent and other sums to Landlord and the obligations of Landlord under this Lease are independent obligations.

Bad Annotation

1. The cost of such additions or modifications made by Landlord shall be included in Operating Expenses pursuant to Paragraph 6 of this Lease.
Explanation: This is a rule but not directly an obligation to the Landlord. So, no category is expressed. (wrong category, wrong span)

# Contents of this test

You will be asked two kinds of questions to test your ability to do the two types of annotations.

1.
Which of the categories is expressed in the below sentence with respect to an entity?
2. Which of the following word/words is the correct trigger for a category with respect to an entity in the given sentence?

Figure 5: Instructions and examples provided to the annotators.

![](images/a8e3bd705da68831f6fd5bf597b0fe5574d8a3d0e4c51450ff8628134b8647ca.jpg)
Figure 6: Annotation Interface.
| Type | Examples |
|------|----------|
| Obl | [tenant] Tenant shall repair any damage resulting from such removal and shall restore the Property to good order and condition.<br>[tenant] Tenant acknowledges and agrees that Landlord shall have the right to adopt reasonable rules and regulations for the use and/or occupancy of the Leased Premises and Tenant agrees that it shall at all times observe and comply with such rules and regulations. |
| Ent | [tenant] Tenant shall also have the right to use the roof riser space of the Building.<br>[lessor] Rent shall be payable at Lessor's place of business, or such other place as Lessor may direct from time to time.<br>[landlord] Landlord reserves the right to modify Common Areas, provided that such modifications do not materially adversely affect Tenant's access to or use of the Premises for the Permitted Use. |
| Pro | [lessee] Lessee shall not commit or allow waste to be committed on the Premises, and Lessee shall not allow any hazardous activity to be engaged in upon the Premises.<br>[lessor] Neither Lessor nor Lessee may record this Lease nor a short-form memorandum thereof. |
| Per | [tenant] Tenant may, without Landlord's consent, before delinquency occurs, contest any such taxes related to the Personal Property.<br>[lessor] Additional keys may be furnished at a charge by Lessor. |
| Nobl | [tenant] For the avoidance of doubt, to the extent there is a bank vault in the Premises, Tenant shall have no obligation to remove such vault on surrendering the Premises.<br>[lessor] Further, in no event shall Lessor have any obligation to repair any damage to, or replace any of Lessee's furniture, trade fixtures, equipment or other personal property. |
| Nent | [landlord] Landlord hereby waives the right to any revenue that may be generated as a result of the use of the roof by Tenant or any other third parties pursuant to the terms of the Lease during the Term.<br>[lessee] The Lessee will not be entitled to a reimbursement of any part of the Rent, even if in practice the Building Capacity for which it has paid has not been utilized. |
| None | [lessor] For the avoidance of doubt, it is hereby clarified that wherever the word Lessor is written this means: "the Lessor and/or anyone acting on its behalf".<br>[landlord] Other than the Purchase Agreement, this Lease represents the entire agreement and understanding between Landlord and Tenant with respect to the subject matter herein, and there are no representations, understandings, stipulations, agreements or promises not incorporated in writing herein. |
+ +Table 15: Sample annotated sentences for each deontic type with respect to an [Agent] and trigger annotations in bold-face. \ No newline at end of file diff --git a/agentspecificdeonticmodalitydetectioninlegallanguage/images.zip b/agentspecificdeonticmodalitydetectioninlegallanguage/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..f12ff71ed2c380e7b4f43b482cc54f588e51317d --- /dev/null +++ b/agentspecificdeonticmodalitydetectioninlegallanguage/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af76d4b143977fd89f26cbb91681f8b4524272f3f50d3fc55e8d0f29ca2023df +size 937856 diff --git a/agentspecificdeonticmodalitydetectioninlegallanguage/layout.json b/agentspecificdeonticmodalitydetectioninlegallanguage/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..263b68766fe084ad281d087afecef95fc4bb8512 --- /dev/null +++ b/agentspecificdeonticmodalitydetectioninlegallanguage/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:436f0cfee477144c08fe6c8620f12268ad0ae6c0c223b234b3248c29f7adc156 +size 656019 diff --git a/agoodneighborafoundtreasureminingtreasuredneighborsforknowledgegraphentitytyping/d4929f2e-7ee0-446a-87fd-4d3cf2a4a837_content_list.json b/agoodneighborafoundtreasureminingtreasuredneighborsforknowledgegraphentitytyping/d4929f2e-7ee0-446a-87fd-4d3cf2a4a837_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..2768806bdffc83678873e19acd3f37378ec03b0d --- /dev/null +++ b/agoodneighborafoundtreasureminingtreasuredneighborsforknowledgegraphentitytyping/d4929f2e-7ee0-446a-87fd-4d3cf2a4a837_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d055fd6224fa2e76928514392acd4c01783356b3a56e8e56dd72b6e14f66f1d6 +size 78914 diff --git a/agoodneighborafoundtreasureminingtreasuredneighborsforknowledgegraphentitytyping/d4929f2e-7ee0-446a-87fd-4d3cf2a4a837_model.json 
b/agoodneighborafoundtreasureminingtreasuredneighborsforknowledgegraphentitytyping/d4929f2e-7ee0-446a-87fd-4d3cf2a4a837_model.json new file mode 100644 index 0000000000000000000000000000000000000000..4b8d8890ca61d0cb23ad1b813a18bc9b70612c85 --- /dev/null +++ b/agoodneighborafoundtreasureminingtreasuredneighborsforknowledgegraphentitytyping/d4929f2e-7ee0-446a-87fd-4d3cf2a4a837_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73e2caba8cd85cac4398d0666b7274608ff94ceb94055c0f392790363398a7a2 +size 94029 diff --git a/agoodneighborafoundtreasureminingtreasuredneighborsforknowledgegraphentitytyping/d4929f2e-7ee0-446a-87fd-4d3cf2a4a837_origin.pdf b/agoodneighborafoundtreasureminingtreasuredneighborsforknowledgegraphentitytyping/d4929f2e-7ee0-446a-87fd-4d3cf2a4a837_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1747f5784f05eb2d10d3b66fb9203853d0c2864b --- /dev/null +++ b/agoodneighborafoundtreasureminingtreasuredneighborsforknowledgegraphentitytyping/d4929f2e-7ee0-446a-87fd-4d3cf2a4a837_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:751e8a60f9ba6f4ebf936f5e876b206e204f856206f2241b319d195911889945 +size 488517 diff --git a/agoodneighborafoundtreasureminingtreasuredneighborsforknowledgegraphentitytyping/full.md b/agoodneighborafoundtreasureminingtreasuredneighborsforknowledgegraphentitytyping/full.md new file mode 100644 index 0000000000000000000000000000000000000000..2e2e6b19d7ef3fd223ed331ea9b0cec010d455a7 --- /dev/null +++ b/agoodneighborafoundtreasureminingtreasuredneighborsforknowledgegraphentitytyping/full.md @@ -0,0 +1,348 @@ +# A Good Neighbor, A Found Treasure: Mining Treasured Neighbors for Knowledge Graph Entity Typing

Zhuoran Jin $^{1,2}$ , Pengfei Cao $^{1,2}$ , Yubo Chen $^{1,2}$ , Kang Liu $^{1,2,3}$ , Jun Zhao $^{1,2}$

$^{1}$ School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China

$^{2}$ National Laboratory of
Pattern Recognition, Institute of Automation, CAS, Beijing, China + +3 Beijing Academy of Artificial Intelligence, Beijing, China + +{zhuoran.jin, pengfei.cao, yubo.chen, kliu, jzhao}@nlpr.ia.ac.cn + +# Abstract + +The task of knowledge graph entity typing (KGET) aims to infer the missing types for entities in knowledge graphs. Some pioneering work has proved that neighbor information is essential for the task. However, existing methods only leverage the one-hop neighbor information of the central entity, ignoring the multi-hop neighbor information that can provide valuable clues for inference. Besides, we also observe that there are co-occurrence relations between types, which is very helpful in alleviating the false-negative problem. In this paper, we propose a novel method called Mining Treasured Neighbors (MiNer) to make use of these two characteristics. Firstly, we devise a Neighbor Information Aggregation module to aggregate the neighbor information. Then, we propose an Entity Type Inference module to mitigate the adverse impact of the irrelevant neighbor information. Finally, a Type Co-occurrence Regularization module is designed to prevent the model from overfitting the false-negative examples caused by missing types. Experimental results on two widely used datasets indicate that our approach significantly outperforms previous state-of-the-art methods. $^{1}$ + +# 1 Introduction + +Knowledge graphs (KGs) store huge amounts of structured data in the form of triples, i.e., (head entity, relation, tail entity). Each entity in KGs is labeled with one or more types. As shown in Figure 1, the entity Einstein not only belongs to Scientist type, but also Physicist type. The entity type information is very important and can benefit many natural language processing (NLP) applications, such as entity linking (Chen et al., 2018), relation extraction (Vashisth et al., 2018), knowledge graph embedding (Xie et al., 2016) and text generation (Dong et al., 2021). 
![](images/710103b2328a0531147b5f82cf1008af5c1f3297061d0d36d3dea58570234096.jpg)
Figure 1: An example of a KG fragment. Large brown circles denote entities and small red circles denote types. Brown solid lines denote relations between entities, red solid lines denote existing types of entities and red dotted lines denote missing types of entities.

Unfortunately, KGs usually suffer from the entity type incompleteness problem. More specifically, one entity may have multiple types, while the annotated entity type information is usually incomplete. Take Figure 1 as an example: the entity Newton should be labeled with the Scientist, Mathematician, Physicist and Englander types, but only the Scientist type is annotated in the KG. According to statistics on FB15k (Moon et al., 2017), $10\%$ of entities have the /music/artist type but are missing the /people/person type, which indicates that the type incompleteness problem is not negligible. Therefore, we focus on knowledge graph entity typing (KGET), which aims to infer the missing types of entities in KGs from their existing information.

Great efforts have been devoted to tackling the KGET task, and existing approaches can be mainly divided into embedding-based methods (Moon et al., 2017; Zhao et al., 2020) and graph neural network-based methods (Pan et al., 2021; Zhuo et al., 2022). Embedding-based methods focus on learning low-dimensional vector representations of entities, relations and entity types, and then predict missing types based on a scoring function. Although embedding-based methods are simple and intuitive, they ignore the rich neighbor information of entities. By contrast, graph neural network-based methods effectively leverage the neighbor information by modeling it as graph-structured data to infer the missing types, which has been shown to be the most effective approach for the KGET task (Pan et al., 2021).
Despite these successful efforts, existing methods ignore multi-hop neighbor information and type co-occurrence information, both of which are very important for the KGET task.

Multi-hop neighbor information can provide more valuable clues for inference. For example, in Figure 1, based on the one-hop neighbor information alone, it is difficult to predict that the entity Newton has the Mathematician type. Fortunately, multi-hop neighbor information can provide more conclusive clues. For instance, the KG records that Newton and Leibniz (a two-hop neighbor) both invented Calculus, and that Leibniz has the Mathematician type. Combining the two, the model can easily conclude that Newton has the Mathematician type. Nevertheless, aggregating multi-hop neighbors may bring in irrelevant information. As shown in Figure 1, only the triple (Newton, Born in, England) plays a decisive role in inferring the Englander type for Newton, while the other neighbors contribute less. Therefore, the first challenging problem is how to mine treasured neighbor information for inference.

Type co-occurrence information can help alleviate the false-negative problem. Some entity types should be stored in KGs but are not annotated, and most existing methods simply treat them as negative samples. They therefore face a serious false-negative problem that hurts model performance. We observe a rich amount of type co-occurrence information in KGs that can be used to address this problem. For instance, the Scientist and Physicist types often go hand in hand, whereas the Scientist and Actor types rarely belong to the same entity. If the model makes use of this prior knowledge, it can avoid memorizing false labels and accurately predict the missing types. Therefore, the second challenging problem is how to leverage the type co-occurrence information.

In this paper, we propose a novel method termed Mining Treasured Neighbors (MiNer) to address the aforementioned problems.
The proposed method consists of three modules: a Neighbor Information Aggregation module, an Entity Type Inference module and a Type Co-occurrence Regularization module. First, the Neighbor Information Aggregation module aggregates neighbor information, including both one-hop and multi-hop neighbor information; it generalizes to any number of hops. To mitigate the adverse impact of irrelevant neighbor information, the Entity Type Inference module then mines the valuable neighbor information for central entities in two ways: type-specific local inference and type-agnostic global inference. In addition, we leverage the type co-occurrence information to alleviate the memorization of false labels: the Type Co-occurrence Regularization module corrects false-negative examples caused by missing types. Experimental results on two widely used datasets indicate that our approach significantly outperforms previous state-of-the-art methods.

Our contributions are summarized as follows:

- We propose a novel method called MiNer, designed to aggregate both one-hop and multi-hop neighbors, then mine valuable information for missing type inference.
- We notice the strong correlations between different types and propose type co-occurrence regularization to mitigate the impact of the false-negative problem.
- We conduct thorough experiments with ablation studies on two widely used datasets, demonstrating that our approach significantly outperforms previous state-of-the-art methods.

# 2 Related Work

Knowledge graph entity typing (KGET) is an essential sub-task of knowledge graph completion (KGC) that has been researched for decades. Existing approaches for the task can be mainly divided into embedding-based methods and graph neural network-based methods.

Embedding-based Methods. The entities with known types can be treated as special triples with a unique relation "Has type", e.g., (Newton, Has type, Physicist).
In this way, the KGET task can be formulated as a link prediction task. Existing knowledge graph embedding (KGE) methods (Bordes et al., 2013; Wang et al., 2014; Trouillon et al., 2016; Sun et al., 2019; Chao et al., 2021) can be used to infer the missing types of Newton by completing the triple (Newton, Has type, ?).

Although KGE methods can address the KGET task to some degree, this formulation ignores the diversity of relation types. Therefore, Moon et al. (2017) propose the ETE model to tackle this problem. ETE first learns entity embeddings and relation embeddings with a KGE model on KGs that do not have entity types, then trains the embedding of each entity to be closer to the embedding of its type. For better expressing and reasoning capability, Zhao et al. (2020) propose the ConnectE model, which considers both local entity typing information and global triple knowledge in KGs. ConnectE first uses TransE (Bordes et al., 2013) to obtain the entity embeddings, then infers the missing types according to two inference mechanisms. One is the E2T mechanism, which focuses on mapping entities from entity space to entity type space. The other is the TRT mechanism, which is based on the assumption that the relation can remain unchanged when the entities in a triple are replaced with their types.

Graph Neural Network-based Methods. Although embedding-based methods are simple and intuitive, they ignore the rich neighbor information. Graph neural networks (GNNs) have proved to be quite successful in modeling graph-structured data. Considering that a KG is also a kind of graph-structured data, existing GNN methods, such as RGCN (Schlichtkrull et al., 2018), GAT (Veličković et al., 2018), WGCN (Shang et al., 2019) and CompGCN (Vashishth et al., 2020), can be used to aggregate the neighbor information better.

Pan et al. (2021) argue that this may introduce irrelevant information as noise and affect the performance of entity typing. Pan et al.
(2021) propose CET, which fully utilizes the neighbor information through the N2T and Agg2T mechanisms. The N2T mechanism independently uses each neighbor to infer the missing types of central entities; the Agg2T mechanism aggregates neighbors to infer the missing types. Zhuo et al. (2022) present AttEt to capture the different weight distributions of the fine-grained entity types over each neighbor.

Despite the vast progress that the KGET task has made in recent years, existing methods only leverage the one-hop neighbor information, and ignore the multi-hop neighbor information and type co-occurrence information that are very important for the KGET task.

Other Methods. Zhao et al. (2022) propose a multiplex relational graph attention network as the encoder to learn embeddings and then use ConnectE as the decoder to make entity type inference. There are also some methods (Neelakantan and Chang, 2015; Jin et al., 2018, 2019) that use additional information (i.e., entity name, text description, and property) to infer the missing types.

# 3 Task Definition

Formally, we consider a KG $\mathcal{G}$ containing triples in the form of $(e,r,\tilde{e})$ and entity type information in the form of $(e,t)$ , where $e,\tilde{e}\in \mathcal{E}$ , $r\in \mathcal{R}$ , $t\in \mathcal{T}$ , and $\mathcal{E},\mathcal{R},\mathcal{T}$ are the entity set, relation set and entity type set, respectively. One entity may have multiple types, while the annotated entity types are usually incomplete. The neighborhoods of entities can provide valuable and rich information for inferring missing types. Following the setting of Pan et al. (2021), we also regard the existing types of each entity as its one-hop neighbors. Therefore, this paper aims to infer the missing types by considering the neighbor information (i.e., one-hop neighbor information and multi-hop neighbor information) of entities.
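To make the setting concrete, here is a minimal data-structure sketch (not from the paper; all entity and relation names are illustrative) of how triples and type assertions can be stored so that existing types act as one-hop neighbors under a dedicated pseudo-relation, following the convention of Pan et al. (2021) described above:

```python
from collections import defaultdict

# Hypothetical toy fragment of the KG from Figure 1 (names illustrative).
triples = [("Newton", "invent", "Calculus"),
           ("Leibniz", "invent", "Calculus"),
           ("Newton", "born_in", "England")]
type_assertions = [("Newton", "Scientist"), ("Leibniz", "Mathematician")]

# Unify outgoing and incoming edges as one-hop neighbors; the direction
# flag (+1 / -1) is what the indicator function I(.) later consumes.
neighbors = defaultdict(list)
for h, r, t in triples:
    neighbors[h].append((r, t, +1))   # outgoing edge
    neighbors[t].append((r, h, -1))   # incoming edge
# Existing types become one-hop neighbors via a pseudo-relation.
for e, ty in type_assertions:
    neighbors[e].append(("has_type", ty, +1))

print(neighbors["Newton"])
```

With this representation, multi-hop neighbors are simply obtained by following `neighbors` recursively from the central entity.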
# 4 Method

Our approach is shown in Figure 2; it consists of three primary components: (1) Neighbor Information Aggregation, which aggregates the information from one-hop neighbors and multi-hop neighbors; (2) Entity Type Inference, which mines valuable neighbor information for inferring missing types; and (3) Type Co-occurrence Regularization, which prevents the model from overfitting false-negative samples by using type co-occurrence information. We detail these three modules below.

# 4.1 Neighbor Information Aggregation

As mentioned above, it is not enough to use only one-hop neighbors; multi-hop neighbors are also important. The neighbor information aggregation module aggregates both kinds of neighbor information.

One-hop Neighbor Aggregation. The one-hop neighbors of an entity are the most straightforward treasure for inferring the entity's types. We first unify the one-hop outgoing neighbors and incoming neighbors as the one-hop neighbors of the central entity. Then, we follow the translational

![](images/0fa312ba2029085fa03acaf2a8d145d50a83528225452f68827292eccc0d9901.jpg)
Figure 2: The main architecture of MiNer, which consists of three primary modules. The blue circle denotes the central entity, the green circles denote the one-hop neighbors, and the yellow circles denote the multi-hop neighbors.

assumption of TransE (Bordes et al., 2013) to obtain the one-hop neighbor information. We choose TransE for its simplicity and efficiency. Formally, for the central entity $e$ , its representation aggregated from the one-hop neighbor $(r, \tilde{e})^2$ can be computed as follows:

$$
\boldsymbol {h} _ {(r, \tilde {e}), 1} = \tilde {\boldsymbol {e}} - \mathbb {I} (\boldsymbol {r}), \tag {1}
$$

where $(e,r,\tilde{e})$ is a triple in the KG $\mathcal{G}$ , and $\tilde{\boldsymbol{e}}$ and $\boldsymbol{r}$ denote the representations of the entity $\tilde{e}$ and relation $r$ , respectively.
$\mathbb{I}(\cdot)$ accounts for the direction of the edge: $\mathbb{I}(\boldsymbol{r})$ equals $\boldsymbol{r}$ if $(e,r,\tilde{e})\in \mathcal{G}$ , and $-\boldsymbol{r}$ if $(\tilde{e},r,e)\in \mathcal{G}$ . $\boldsymbol{h}_{(r,\tilde{e}),1}\in \mathbb{R}^d$ is the representation aggregated from the one-hop neighbor $(r,\tilde{e})$ .

Multi-hop Neighbor Aggregation. In addition to one-hop neighbors, we also consider multi-hop neighbors, which can provide valuable inference evidence. The main idea of multi-hop neighbor aggregation is to iteratively represent the $(n - 1)$ -hop neighbors by the $n$ -hop neighbors and then represent the central entity by its one-hop neighbors. We first take two-hop neighbors as an example to illustrate the aggregation process, and then generalize it to the case of more hops. Formally, for the central entity $e$ , its representation aggregated from its two-hop neighbors can be computed as follows:

$$
\boldsymbol {h} _ {(r, \tilde {e}), 2} = \mathcal {M} _ {2} (\tilde {e}) - \mathbb {I} (\boldsymbol {r}), \tag {2}
$$

where $\boldsymbol{h}_{(r,\tilde{e}),2} \in \mathbb{R}^d$ is the representation aggregated from two-hop neighbors via the one-hop neighbor $(r,\tilde{e})$ , and $\mathcal{M}_2(\cdot)$ computes a one-hop neighbor representation from the two-hop neighbor representations of the central entity:

$$
\mathcal {M} _ {2} (e) = \frac {1}{| \mathcal {N} (e) |} \sum_ {\left(r _ {i}, \tilde {e} _ {i}\right) \in \mathcal {N} (e)} \left(\tilde {\boldsymbol {e}} _ {i} - \mathbb {I} \left(\boldsymbol {r} _ {i}\right)\right), \tag {3}
$$

where $\mathcal{N}(e)$ denotes the one-hop neighbors of the entity $e$ .
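As a sanity check on Eqs. (1)-(3), the following NumPy sketch computes a two-hop representation from random toy embeddings (entity and relation names are illustrative, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Hypothetical embeddings for a few entities and relations.
emb = {k: rng.normal(size=d) for k in ["Newton", "Leibniz", "Calculus",
                                       "invent", "born_in", "England"]}

def I(r_vec, direction):
    """I(r) is +r for an outgoing triple (e, r, e~), -r for an incoming one."""
    return direction * r_vec

def one_hop_repr(r, tail, direction):
    # Eq. (1): h_{(r, e~), 1} = e~ - I(r)
    return emb[tail] - I(emb[r], direction)

def m2(neighbors_of_e):
    # Eq. (3): M_2(e) = mean over one-hop neighbors of (e~_i - I(r_i))
    return np.mean([one_hop_repr(r, t, s) for (r, t, s) in neighbors_of_e],
                   axis=0)

# Eq. (2): central entity's representation through one-hop neighbor (r, e~):
# h_{(r, e~), 2} = M_2(e~) - I(r).  Here e~ = Leibniz, reached via "invent".
leibniz_neighbors = [("invent", "Calculus", +1)]
h2 = m2(leibniz_neighbors) - I(emb["invent"], +1)
print(h2.shape)  # (4,)
```

For this single-neighbor toy case, `h2` reduces to `emb["Calculus"] - 2 * emb["invent"]`, which makes the translational composition easy to verify by hand.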
To generalize our method, we consider aggregating the multi-hop neighbor information within $h$ ( $h \geq 3$ ) hops, which can be computed as follows:

$$
\boldsymbol {h} _ {(r, \tilde {e}), h} = \mathcal {M} _ {h} (\tilde {e}) - \mathbb {I} (\boldsymbol {r}),
$$

$$
\mathcal {M} _ {h} (e) = \frac {1}{| \mathcal {N} (e) |} \sum_ {\left(r _ {i}, \tilde {e} _ {i}\right) \in \mathcal {N} (e)} \left(\rho_ {1} \left(\tilde {\boldsymbol {e}} _ {i}\right) - \mathbb {I} \left(\boldsymbol {r} _ {i}\right)\right), \tag {4}
$$

$$
\rho_ {j} (e) = \alpha_ {j} \boldsymbol {e} + \frac {1 - \alpha_ {j}}{| \mathcal {N} (e) |} \sum_ {\left(r _ {i}, \tilde {e} _ {i}\right) \in \mathcal {N} (e)} \left(\rho_ {j + 1} \left(\tilde {\boldsymbol {e}} _ {i}\right) - \mathbb {I} \left(\boldsymbol {r} _ {i}\right)\right),
$$

where $\mathcal{M}_h(\cdot)$ aggregates the $h$ -hop neighbor information, $\rho_j(\cdot)$ is computed as a skip connection between the $j$ -hop and $(j + 1)$ -hop neighbors, and $\rho_j(e) = \boldsymbol{e}$ when $j = h - 2$ .

# 4.2 Entity Type Inference

In fact, different neighbors have different effects on different types of the central entity. For example, the Englander type of Newton can only be indicated by a few neighbors (i.e., England), and most of the neighbors (i.e., Einstein and Leibniz) are irrelevant. Therefore, we need to mitigate the adverse effect of useless neighbor information. Inspired by class-specific residual attention (CSRA) (Zhu and Wu, 2021), which captures the different spatial regions occupied by objects from different categories, we propose the entity type inference module to capture accurate and useful neighbor features. We first use a non-linear classifier to compute the score vector $s_i$ for the $i$ -th neighbor $(r_i, \tilde{e}_i)$ :

$$
\boldsymbol {s} _ {i} = \boldsymbol {W} \left(\sigma \left(\boldsymbol {h} _ {i}\right)\right), \tag {5}
$$

where $\boldsymbol{W} \in \mathbb{R}^{|\mathcal{T}| \times d}$ is the weight matrix, $|\mathcal{T}|$ is the total number of entity types, and $\sigma(\cdot)$ is the activation function (e.g., ReLU). $\boldsymbol{h}_i = [h_i^1, h_i^2, \dots, h_i^d]^T \in \mathbb{R}^d$ is the hidden representation of the central entity aggregated from the $i$ -th neighbor, $\boldsymbol{s}_i = [s_i^1, s_i^2, \dots, s_i^{|\mathcal{T}|}]^T \in \mathbb{R}^{|\mathcal{T}|}$ is the score vector of the $i$ -th neighbor, and $s_i^j$ indicates the probability score for inferring the $j$ -th type based on the neighbor $(r_i, \tilde{e}_i)$ . Based on the score vector of each neighbor, we predict the central entity's types by type-specific local inference and type-agnostic global inference.

Type-specific Local Inference. Since different neighbors have different effects on different types of the central entity, we devise a type-specific attention. Formally, for the central entity $e$ , we define the type-specific attention weight $a_{i}^{j}$ for its $i$ -th neighbor $(r_{i},\tilde{e}_{i})$ and the $j$ -th type as:

$$
a _ {i} ^ {j} = \frac {\exp \left(s _ {i} ^ {j} / T\right)}{\sum_ {\left(r _ {k} , \tilde {e} _ {k}\right) \in \mathcal {N} (e)} \exp \left(s _ {k} ^ {j} / T\right)}, \tag {6}
$$

where $T$ is the temperature controlling the sharpness of the weights, and $a_{i}^{j}$ indicates the importance of the $i$ -th neighbor for inferring the $j$ -th type. Then, we compute the type-specific local score of the $j$ -th type:

$$
n ^ {j} = \sum_ {\left(r _ {k}, \tilde {e} _ {k}\right) \in \mathcal {N} (e)} a _ {k} ^ {j} s _ {k} ^ {j}. \tag {7}
$$

Therefore, we can represent the type-specific local score vector for the central entity $e$ as $\boldsymbol{n} = [n^{1}, n^{2}, \dots, n^{|\mathcal{T}|}]^{T}$ .
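The scoring and type-specific attention of Eqs. (5)-(7) can be sketched as follows (random toy tensors; dimensions and temperature are our own illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_types, n_neighbors, T = 8, 5, 3, 0.5   # illustrative sizes / temperature

H = rng.normal(size=(n_neighbors, d))       # h_i: per-neighbor representations
W = rng.normal(size=(n_types, d))           # classifier weights, Eq. (5)

# Eq. (5): s_i = W sigma(h_i), with sigma = ReLU; row i is neighbor i's scores.
S = np.maximum(H, 0.0) @ W.T                # shape (n_neighbors, n_types)

# Eq. (6): softmax over neighbors, computed separately for each type j.
A = np.exp(S / T) / np.exp(S / T).sum(axis=0, keepdims=True)

# Eq. (7): type-specific local score n^j = sum_k a_k^j * s_k^j.
n = (A * S).sum(axis=0)

print(n.shape)        # one local score per type: (5,)
print(A.sum(axis=0))  # each type's attention weights sum to 1
```

The per-type softmax is what lets a single decisive neighbor (e.g., England for the Englander type) dominate one type's score while being ignored for the others.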
Type-agnostic Global Inference. If two entities have similar types, their hidden representations should be close, so it is necessary to represent the entities well in semantic space. Following the vanilla GCN (Kipf and Welling, 2017), we encode the central entity $e$ as the average pooling of the hidden representations of its neighbors. Therefore, the type-agnostic score can be computed as follows:

$$
\boldsymbol {g} = \frac {1}{| \mathcal {N} (e) |} \sum_ {\left(r _ {k}, \tilde {e} _ {k}\right) \in \mathcal {N} (e)} \boldsymbol {s} _ {k}, \tag {8}
$$

where $\boldsymbol{g}$ is the type-agnostic global score vector of the entity $e$ .

Type Probability Prediction. We combine the type-specific local score and the type-agnostic global score to get the mixed score $\boldsymbol{u}$ :

$$
\boldsymbol {u} = \beta_ {1} \boldsymbol {n} + \beta_ {2} \boldsymbol {g}, \tag {9}
$$

where $\beta_{1}$ and $\beta_{2}$ are balancing hyper-parameters. Following CSRA (Zhu and Wu, 2021), we use a multi-head attention mechanism to compute the final score $\boldsymbol{f}$ :

$$
\boldsymbol {f} = \sum_ {i = 1} ^ {H} \boldsymbol {u} _ {T _ {i}}, \tag {10}
$$

where $H$ is the number of attention heads and $\boldsymbol{u}_{T_i}$ is the mixed score at temperature $T_{i}$ . We predict the type probabilities $\boldsymbol {p} = [p_1,\dots ,p_{|\mathcal{T}|}]^T\in \mathbb{R}^{|\mathcal{T}|}$ based on both one-hop and multi-hop neighbors:

$$
\boldsymbol {p} = \phi (\lambda \boldsymbol {f} ^ {1} + (1 - \lambda) \boldsymbol {f} ^ {h}), \tag {11}
$$

where $\boldsymbol{f}^{1} \in \mathbb{R}^{|\mathcal{T}|}$ is the final score from one-hop neighbors, $\boldsymbol{f}^{h} \in \mathbb{R}^{|\mathcal{T}|}$ is the final score from multi-hop neighbors, $\lambda$ is a hyper-parameter, and $\phi$ denotes the sigmoid activation function.

# 4.3 Type Co-occurrence Regularization

As mentioned above, type co-occurrence information is an overlooked treasure.
Meanwhile, because some entity types are missing, simply regarding the missing types as negative types leads to false-negative samples in the training data. According to the early-learning phenomenon (Arpit et al., 2017), a model first fits the training data with clean labels during an early learning phase, and then memorizes the training data with false labels. Inspired by early-learning regularization (Liu et al., 2020), we propose type co-occurrence regularization, which leverages type co-occurrence statistics to alleviate the memorization of false labels:

$$
\mathcal {R} _ {T C R} = \frac {1}{| \mathcal {E} |} \sum_ {i = 1} ^ {| \mathcal {E} |} \log (1 - \langle \mathcal {S} (\boldsymbol {p} _ {i} (k)), \boldsymbol {t} _ {i} (k) \rangle), \tag {12}
$$

where $\boldsymbol{p}_i(k) \in \mathbb{R}^{|\mathcal{T}|}$ and $\boldsymbol{t}_i(k) \in \mathbb{R}^{|\mathcal{T}|}$ denote the $i$ -th entity's prediction probability and target probability at training iteration $k$ , respectively, $\langle \cdot, \cdot \rangle$ is the inner product, and $\mathcal{S}(\cdot)$ is the softmax function. The target is set as:

$$
\boldsymbol {t} _ {i} (k) = \omega \left(\gamma^ {k} \mathcal {S} (\boldsymbol {C} \boldsymbol {p} _ {i} (k)) + \left(1 - \gamma^ {k}\right) \mathcal {S} (\boldsymbol {p} _ {i} (k))\right) + (1 - \omega) \boldsymbol {t} _ {i} (k - 1), \tag {13}
$$

where $\boldsymbol{C} \in \mathbb{R}^{|\mathcal{T}| \times |\mathcal{T}|}$ is the type co-occurrence matrix, $0 < \omega < 1$ is the momentum, and $0 < \gamma < 1$ is the multiplication factor. For those negative types $((e,t) \notin \mathcal{G})$ with high confidence of being positive, we directly correct them to positive (Li et al., 2021).
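A minimal sketch of the target update in Eq. (13) and the per-entity penalty of Eq. (12); the toy co-occurrence matrix and hyper-parameter values below are our own, purely illustrative choices:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def tcr_target(p_k, t_prev, C, k, omega=0.9, gamma=0.99):
    """Eq. (13): momentum update of the target distribution, mixing the
    co-occurrence-smoothed prediction S(C p) with the raw prediction S(p)."""
    mixed = gamma**k * softmax(C @ p_k) + (1 - gamma**k) * softmax(p_k)
    return omega * mixed + (1 - omega) * t_prev

def tcr_penalty(p_k, t_k):
    """Eq. (12), restricted to a single entity: log(1 - <S(p), t>)."""
    return np.log(1.0 - softmax(p_k) @ t_k)

# Illustrative numbers: 3 types; C encodes that types 0 and 1 co-occur often.
C = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.1],
              [0.0, 0.1, 1.0]])
p = np.array([2.0, 1.5, -1.0])               # current predictions for one entity
t = tcr_target(p, t_prev=np.full(3, 1 / 3), C=C, k=10)
print(tcr_penalty(p, t) < 0)  # the penalty is negative since <S(p), t> < 1
```

Because `C @ p` boosts type 1 whenever type 0 scores highly, the target distribution keeps probability mass on frequently co-occurring types instead of pushing them toward zero as hard negatives.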
# 4.4 Optimization

For training, we adopt the false-negative aware (FNA) loss function (Pan et al., 2021):

$$
\mathcal {L} _ {F N A} = - \sum_ {(e _ {i}, t _ {j}) \notin \mathcal {G}} \mu_ {1} \left(p _ {i} ^ {j} - (p _ {i} ^ {j}) ^ {2}\right) \log \left(1 - p _ {i} ^ {j}\right) - \sum_ {(e _ {i}, t _ {j}) \in \mathcal {G}} \log p _ {i} ^ {j}, \tag {14}
$$

where $p_i^j$ denotes the prediction probability of the $i$ -th entity's $j$ -th type, and $\mu_1$ is a hyper-parameter that controls the overall weight of the negative samples. The FNA loss assigns lower weight to those negative examples with very large or very small relevance scores. Combining $\mathcal{L}_{FNA}$ and $\mathcal{R}_{TCR}$ gives the final optimization objective:

$$
\mathcal {L} = \mathcal {L} _ {F N A} + \mu_ {2} \mathcal {R} _ {T C R}, \tag {15}
$$

where $\mu_{2}$ is a hyper-parameter.

# 5 Experiments

# 5.1 Datasets and Evaluation Metrics

Datasets. We evaluate our proposed method on two real-world KGs, FB15k (Bordes et al., 2013) and YAGO43k (Moon et al., 2017), which are subsets of Freebase (Bollacker et al., 2008) and YAGO (Suchanek et al., 2007), respectively. The two entity typing datasets FB15kET and YAGO43kET (Moon et al., 2017) provide entity type instances by mapping entities from FB15k and YAGO43k to their entity types. The statistics of the two datasets are shown in Appendix A.

Evaluation Metrics. For each test sample, we first calculate the relevance score between the entity and every type. Then, we rank these scores in descending order. For a fair comparison with previous work (Zhao et al., 2020), we also adopt the filtered setting (Bordes et al., 2013), which removes all the known types in the training, validation, and test sets before calculating the score ranking.
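The filtered ranking protocol just described can be sketched as follows (this is our own reading of the protocol: for each gold type, all other known or gold types are masked out before ranking candidates by descending relevance score):

```python
import numpy as np

def filtered_metrics(scores, gold, known):
    """Filtered-setting MR / MRR / Hits@{1,10} for one test entity.

    scores: relevance score per candidate type; gold: held-out true type ids;
    known: type ids already annotated in train/valid/test.
    """
    mr, mrr, hits1, hits10 = [], [], [], []
    for g in gold:
        mask = np.ones(len(scores), dtype=bool)
        mask[list((known | set(gold)) - {g})] = False   # filter other true types
        kept = np.arange(len(scores))[mask]
        order = np.argsort(-scores[mask])               # descending scores
        rank = int(np.where(kept[order] == g)[0][0]) + 1
        mr.append(rank)
        mrr.append(1.0 / rank)
        hits1.append(rank <= 1)
        hits10.append(rank <= 10)
    return np.mean(mr), np.mean(mrr), np.mean(hits1), np.mean(hits10)

scores = np.array([0.9, 0.7, 0.6, 0.2, 0.1])  # relevance to 5 candidate types
print(filtered_metrics(scores, gold=[2], known={0}))
```

In the toy call, type 0 is a known type and is filtered out, so gold type 2 is ranked second (behind type 1) rather than third.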
Following state-of-the-art baselines (Zhao et al., 2020; Pan et al., 2021; Zhuo et al., 2022), we adopt Mean Rank (MR), Mean Reciprocal Rank (MRR) and Hits@{1,3,10} as evaluation metrics.

# 5.2 Implementation Details

Our implementation is based on DGL $^3$ and PyTorch $^4$ . We use the Adam algorithm (Kingma and Ba, 2015) to optimize model parameters. The learning rate is initialized to 1e-3. The embedding dimension is set to 100, the same as in previous methods to ensure fairness. All experiments are conducted on NVIDIA GeForce RTX 3090 GPUs. We select the model with the highest MRR on the validation set. The best-performing hyper-parameter settings are listed in Appendix B.

# 5.3 Baselines

We compare our approach MiNer with previous state-of-the-art methods, which can be divided into two categories:

**Embedding-based methods:** First, we compare our method with classical knowledge graph embedding methods, including TransE (Bordes et al., 2013), ComplEx (Trouillon et al., 2016) and RotatE (Sun et al., 2019). Then we compare with two methods proposed specifically for the KGET task, ETE (Moon et al., 2017) and ConnectE (Zhao et al., 2020).

**Graph neural network-based methods:** We also compare our method with more competitive graph neural network-based methods, including RGCN (Schlichtkrull et al., 2018), CET (Pan et al., 2021), AttEt (Zhuo et al., 2022) and ConnectE-MRGAT (Zhao et al., 2022).

# 5.4 Overall Results

The performance of all the methods on the FB15kET and YAGO43kET datasets is shown in Table 1. We note the following key observations throughout our experiments:

(1) Our method outperforms all the baselines by a large margin on the two datasets. For example, compared with the state-of-the-art model CET (Pan et al., 2021), our method MiNer achieves $3.1\%$ and $1.8\%$ improvements in MRR on FB15kET and YAGO43kET, respectively. This indicates that our proposed method is very effective for this task.
+(2) Compared with the embedding-based methods, graph neural network-based methods achieve better performance. This suggests that the neighbor information is important for the task. However, most graph neural network-based methods only uti + +
| Model | FB15kET MRR | MR | Hit@1 | Hit@3 | Hit@10 | YAGO43kET MRR | MR | Hit@1 | Hit@3 | Hit@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *Embedding-based Methods* | | | | | | | | | | |
| TransE | 0.618 | 18 | 0.504 | 0.686 | 0.835 | 0.427 | 393 | 0.304 | 0.497 | 0.663 |
| ComplEx | 0.595 | 20 | 0.463 | 0.680 | 0.841 | 0.435 | 631 | 0.316 | 0.504 | 0.658 |
| RotatE | 0.632 | 18 | 0.523 | 0.699 | 0.840 | 0.462 | 316 | 0.339 | 0.537 | 0.695 |
| ETE | 0.500 | - | 0.385 | 0.553 | 0.719 | 0.230 | - | 0.137 | 0.263 | 0.422 |
| ConnectE | 0.590 | - | 0.496 | 0.643 | 0.799 | 0.280 | - | 0.160 | 0.309 | 0.479 |
| *Graph Neural Network-based Methods* | | | | | | | | | | |
| R-GCN (h=1) | 0.679 | 20 | 0.597 | 0.722 | 0.843 | 0.372 | 397 | 0.281 | 0.409 | 0.549 |
| R-GCN (h=2) | 0.664 | 29 | 0.580 | 0.709 | 0.830 | 0.360 | 587 | 0.273 | 0.392 | 0.532 |
| MRGAT (h=2) | 0.630 | - | 0.562 | 0.663 | 0.804 | 0.320 | - | 0.243 | 0.343 | 0.482 |
| AttEt (h=1) | 0.620 | - | 0.517 | 0.677 | 0.821 | 0.350 | - | 0.244 | 0.413 | 0.565 |
| CET (h=1) | 0.697 | 19 | 0.613 | 0.745 | 0.856 | 0.503 | 250 | 0.398 | 0.567 | 0.696 |
| *Our Method* | | | | | | | | | | |
| MiNer | **0.728** | **15** | **0.654** | **0.768** | **0.875** | **0.521** | **223** | **0.412** | **0.589** | **0.714** |
+ +Table 1: Experimental results on the FB15kET and YAGO43kET datasets. Bold denotes best results. The results of the baselines are taken from corresponding original papers. $h$ denotes the number of hops. + +
| Setting | FB15kET MRR | MR | Hit@1 | Hit@3 | Hit@10 | YAGO43kET MRR | MR | Hit@1 | Hit@3 | Hit@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *Baseline (w/o Neighbor Information)* | | | | | | | | | | |
| RotatE | 0.632 | 18 | 0.523 | 0.699 | 0.840 | 0.462 | 316 | 0.339 | 0.537 | 0.695 |
| *Our Method* | | | | | | | | | | |
| One-hop Neighbor | 0.716 | 18 | 0.637 | 0.761 | 0.865 | 0.512 | 245 | 0.402 | 0.580 | 0.710 |
| Multi-hop Neighbor (h=2) | 0.724 | 15 | 0.647 | 0.766 | 0.873 | 0.499 | 285 | 0.387 | 0.572 | 0.701 |
| Multi-hop Neighbor (h=3) | 0.721 | 17 | 0.644 | 0.764 | 0.873 | 0.499 | 266 | 0.386 | 0.571 | 0.702 |
| Multi-hop Neighbor (h=4) | 0.726 | 16 | 0.652 | 0.766 | 0.871 | 0.502 | 272 | 0.390 | 0.573 | 0.701 |
| One-hop + Multi-hop Neighbor (h=2) | 0.726 | 15 | 0.653 | 0.764 | 0.871 | 0.521 | 223 | 0.412 | 0.589 | 0.714 |
| One-hop + Multi-hop Neighbor (h=3) | 0.726 | 16 | 0.651 | 0.766 | 0.872 | 0.520 | 224 | 0.411 | 0.589 | 0.713 |
| One-hop + Multi-hop Neighbor (h=4) | 0.728 | 15 | 0.654 | 0.768 | 0.875 | 0.518 | 245 | 0.409 | 0.587 | 0.711 |

Table 2: Experimental results by using different neighbors on the FB15kET and YAGO43kET datasets.

lize one-hop neighbor information, ignoring multi-hop neighbor information.

(3) Traditional graph neural networks can aggregate multi-hop neighbor information. However, two-layer R-GCN performs worse than one-layer R-GCN. The reason may be that simple information aggregation introduces a lot of noise. By contrast, our method can effectively mitigate the impact of irrelevant information.

# 5.5 Effectiveness of Neighbor Information Aggregation

We validate the effectiveness of the neighbor information aggregation module for both one-hop neighbors and multi-hop neighbors. The results are shown in Table 2; we can observe that:

(1) Both one-hop and multi-hop neighbor information contribute to inferring the missing types. The performance improvement from using the multi-hop neighbors is more evident than that from using the one-hop neighbors. We conjecture that the multi-hop neighbors can provide more clues for inference. Moreover, simultaneously using these two kinds of neighbors can further improve the performance.

(2) For the FB15kET dataset, the performance is best with $h = 4$ hops, while $h = 2$ is enough for the YAGO43kET dataset. This phenomenon may be attributed to the fact that the graph of FB15kET is more sparse than the graph of YAGO43kET. In fact, our MiNer can work with various numbers of hops, but too many hops will lead to the over-smoothing problem.

# 5.6 Effectiveness of Entity Type Inference

We verify the effectiveness of the entity type inference module for both type-specific local inference and type-agnostic global inference. The results are shown in Table 3. We have two important
| Setting | FB15kET MRR | MR | Hit@1 | Hit@3 | Hit@10 | YAGO43kET MRR | MR | Hit@1 | Hit@3 | Hit@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *Baseline (w/o Type Inference)* | | | | | | | | | | |
| R-GCN | 0.679 | 20 | 0.597 | 0.722 | 0.843 | 0.372 | 397 | 0.281 | 0.409 | 0.549 |
| *Our Method* | | | | | | | | | | |
| Local | 0.685 | 19 | 0.606 | 0.724 | 0.843 | 0.504 | 265 | 0.392 | 0.575 | 0.704 |
| Global | 0.684 | 19 | 0.603 | 0.726 | 0.845 | 0.399 | 319 | 0.302 | 0.442 | 0.584 |
| Local + Global (H=2) | 0.727 | 15 | 0.652 | 0.767 | 0.873 | 0.516 | 211 | 0.407 | 0.586 | 0.711 |
| Local + Global (H=3) | 0.727 | 15 | 0.653 | 0.768 | 0.874 | 0.516 | 232 | 0.408 | 0.582 | 0.710 |
| Local + Global (H=4) | 0.727 | 15 | 0.652 | 0.769 | 0.874 | 0.512 | 231 | 0.403 | 0.581 | 0.706 |
| Local + Global (H=5) | 0.728 | 15 | 0.654 | 0.768 | 0.875 | 0.521 | 223 | 0.412 | 0.589 | 0.714 |
| Local + Global (H=6) | 0.726 | 15 | 0.652 | 0.767 | 0.873 | 0.516 | 221 | 0.408 | 0.584 | 0.712 |

Table 3: Experimental results by using different inference methods on the FB15kET and YAGO43kET datasets. "Local" and "Global" refer to "type-specific local inference" and "type-agnostic global inference". $H$ denotes the number of heads.

![](images/e5ca1f9961045d8f6c0cbdaa68d6035995798e5c834198522dfd00f703067c3c.jpg)
Figure 3: MRR scores for different numbers of hops on the FB15kET dataset.

observations:

(1) For the FB15kET dataset, type-specific local inference and type-agnostic global inference work equally well. For the YAGO43kET dataset, type-specific local inference performs better than type-agnostic global inference. This empirically confirms that type-specific local inference works well when there are more entity types.

(2) Simultaneously using the two kinds of inference can further improve performance. Meanwhile, the multi-head attention mechanism plays an important role, especially when the number of attention heads $H = 5$ .

# 5.7 Effectiveness of Type Co-occurrence Regularization

We validate the effectiveness of the type co-occurrence regularization (TCR) module. The results are shown in Figure 3. Overall, we can observe that:

(1) TCR can further improve the performance of our method. This is because TCR can alleviate the memorization of false-negative types.

(2) TCR works well under the different number
| Type | Golden | One-hop | Multi-hop | One+Multi-hop |
| --- | --- | --- | --- | --- |
| /award-winning | 0 | 0.693 | 0.033 | 0.106 |
| /legal/topic | 0 | 0.283 | 0.538 | 0.457 |
| /film/actor | 0 | 0.067 | 0.003 | 0.007 |
| /athletics/topic | 1 | 0.995 | 0.393 | 0.780 |
| /naval_combatant | 1 | 0.381 | 0.698 | 0.608 |
| /fictional_settings | 1 | 0.752 | 0.691 | 0.710 |

Table 4: Prediction probabilities of the entity /m/0f819c (France) for some types. "1" or "0" indicates whether the entity has this type.

of hops settings, which indicates that the module is not sensitive to the number of hops.

# 5.8 Case Study

We conduct a case study to verify the effectiveness of our method. Table 4 shows some prediction results for the entity /m/0f819c, which refers to France. We can observe that one-hop neighbors and multi-hop neighbors are both critical. Take the /award-winning type as an example: using only one-hop neighbor information leads to a wrong inference (i.e., the model predicts that /m/0f819c has this type with high probability), while our method makes the correct inference based on the multi-hop neighbor information. This demonstrates that multi-hop neighbor information is essential for the task.

# 6 Conclusion

In this paper, we propose a novel method called MiNer to mine treasured neighbors. First, MiNer aggregates one-hop and multi-hop neighbor information. Then, MiNer predicts the entity types by type-specific local inference and type-agnostic global inference. Finally, we use type co-occurrence regularization to prevent our model from overfitting the false-negative samples. Experimental results on two widely used datasets indicate that our approach significantly outperforms previous state-of-the-art methods.

# Limitations

Although our approach works well, some limitations remain to be resolved in the future. The primary limitation is how to perform more efficient inference on KGs. Our method needs to aggregate all the candidate neighbors and then mine the treasured neighbors for inferring entity types. We call this kind of method Post-mining. Post-mining methods will introduce some unrelated information when aggregating neighbors.
However, Pre-mining methods can select valuable neighbors during the aggregation stage. Pre-mining methods are computationally efficient, but designing a reasonable criterion for choosing neighbors is non-trivial. We will investigate this in future work. + +# Acknowledgements + +We thank the anonymous reviewers for their constructive comments. This work is supported by the National Key Research and Development Program of China (No. 2020AAA0106400), the National Natural Science Foundation of China (No. 62176257, 61976211, 61922085). This work is also supported by the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDA27020200), the Youth Innovation Promotion Association CAS, and Yunnan Provincial Major Science and Technology Special Plan Projects (No. 202103AA080015). + +# References + +Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. 2017. A closer look at memorization in deep networks. In Proceedings of the 34th International Conference on Machine Learning. +Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. +Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Proceedings of Advances in Neural Information Processing Systems. + +Linlin Chao, Jianshan He, Taifeng Wang, and Wei Chu. 2021. PairRE: Knowledge graph embeddings via paired relation vectors. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). +Shuang Chen, Jinping Wang, Feng Jiang, and Chin-Yew Lin. 2018.
Improving entity linking by modeling latent entity type information. In Proceedings of the AAAI Conference on Artificial Intelligence. +Xiangyu Dong, Wenhao Yu, Chenguang Zhu, and Meng Jiang. 2021. Injecting entity types into entity-guided text generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. +Hailong Jin, Lei Hou, Juanzi Li, and Tiansi Dong. 2018. Attributed and predictive entity embedding for fine-grained entity typing in knowledge bases. In Proceedings of the 27th International Conference on Computational Linguistics. +Hailong Jin, Lei Hou, Juanzi Li, and Tiansi Dong. 2019. Fine-grained entity typing via hierarchical multi graph convolutional networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). +Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations. +Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In Proceedings of International Conference on Learning Representations. +Changchun Li, Ximing Li, Lei Feng, and Jihong Ouyang. 2021. Who is your right mixup partner in positive and unlabeled learning. In Proceedings of International Conference on Learning Representations. +Sheng Liu, Jonathan Niles-Weed, Narges Razavian, and Carlos Fernandez-Granda. 2020. Early-learning regularization prevents memorization of noisy labels. In Proceedings of Advances in Neural Information Processing Systems. +Changsung Moon, Paul Jones, and Nagiza F Samatova. 2017. Learning entity type embeddings for knowledge graph completion. In Proceedings of the 2017 ACM on conference on information and knowledge management. +Arvind Neelakantan and Ming-Wei Chang. 2015. 
Inferring missing entity type instances for knowledge base completion: New dataset and methods. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. + +Weiran Pan, Wei Wei, and Xian-Ling Mao. 2021. Context-aware entity typing in knowledge graphs. In Findings of the Association for Computational Linguistics: EMNLP 2021. +Michael Sejr Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In European semantic web conference. +Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. 2019. End-to-end structure-aware convolutional networks for knowledge base completion. In Proceedings of the AAAI Conference on Artificial Intelligence. +Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web. +Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. RotatE: Knowledge graph embedding by relational rotation in complex space. In Proceedings of International Conference on Learning Representations. +Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of The 33rd International Conference on Machine Learning. +Shikhar Vashishth, Rishabh Joshi, Sai Suman Prayaga, Chiranjib Bhattacharyya, and Partha Talukdar. 2018. RESIDE: Improving distantly-supervised neural relation extraction using side information. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. +Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. 2020. Composition-based multi-relational graph convolutional networks. In International Conference on Learning Representations.
+Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations. +Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence. +Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2016. Representation learning of knowledge graphs with hierarchical types. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16). +Yu Zhao, Anxiang Zhang, Ruobing Xie, Kang Liu, and Xiaojie Wang. 2020. Connecting embeddings for knowledge graph entity typing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. + +Yu Zhao, Han Zhou, Anxiang Zhang, Ruobing Xie, Qing Li, and Fuzhen Zhuang. 2022. Connecting embeddings based on multiplex relational graph attention networks for knowledge graph entity typing. IEEE Transactions on Knowledge and Data Engineering. +Ke Zhu and Jianxin Wu. 2021. Residual attention: A simple but effective method for multi-label recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision. +Jianhuan Zhuo, Qiannan Zhu, Yinliang Yue, Yuhong Zhao, and Weisi Han. 2022. A neighborhood-attention fine-grained entity typing for knowledge graph completion. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining. + +# A Statistics of Datasets + +The statistics of the FB15kET and YAGO43kET datasets are shown in Table 5. + +
| Statistics | FB15kET | YAGO43kET |
| --- | --- | --- |
| #Entity (\|E\|) | 14,951 | 42,334 |
| #Relation (\|R\|) | 1,345 | 37 |
| #Type (\|T\|) | 3,584 | 45,182 |
| #Tuple (\|G\|) | 483,142 | 331,686 |
| #Train | 136,618 | 375,853 |
| #Valid | 15,848 | 43,111 |
| #Test | 15,847 | 43,119 |
+ +Table 5: Statistics of FB15kET and YAGO43kET. + +# B Hyper-parameter Settings + +
| Parameters | FB15kET Settings | YAGO43kET Settings |
| --- | --- | --- |
| $\alpha$ | {0.2, 0.3, 0.4, 0.5} | {0.6, 0.7, 0.8, 0.9} |
| $\beta_1$ | {0.5, 1.0, 1.5} | {0.5, 1.0, 1.5} |
| $\beta_2$ | {0.5, 1.0, 1.5} | {0.5, 1.0, 1.5} |
| $\lambda$ | {0.3, 0.6, 0.9} | {0.3, 0.6, 0.9} |
| $h$ | {2, 3, 4} | {2, 3, 4} |
| $H$ | {2, 3, 4, 5, 6} | {2, 3, 4, 5, 6} |
| $\gamma$ | {0.3, 0.5, 0.7} | {0.3, 0.5, 0.7} |
| $\omega$ | {0.5, 0.7, 0.9} | {0.5, 0.7, 0.9} |
| $\mu_2$ | {1, 2, 3} | {1, 2, 3} |
+ +Table 6: The hyper-parameter settings of the FB15kET and YAGO43kET datasets. + +As shown in Table 6, $\alpha$ denotes the weight of the skip connection, $\beta_{1}$ the weight of the type-specific local score, $\beta_{2}$ the weight of the type-agnostic global score, $\lambda$ the weight of one-hop neighbors, $h$ the number of hops, $H$ the number of heads, $\gamma$ the momentum, $\omega$ the multiplication factor, and $\mu_{2}$ the weight of regularization.
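The candidate sets in Table 6 imply a grid search over the Cartesian product of the per-parameter choices. A minimal stdlib sketch (all names, including the shortened FB15kET grid shown, are illustrative, not the authors' tuning code):

```python
from itertools import product

# A subset of the FB15kET candidate sets from Table 6 (illustrative).
grid = {
    "alpha": [0.2, 0.3, 0.4, 0.5],   # skip-connection weight
    "beta1": [0.5, 1.0, 1.5],        # type-specific local score weight
    "beta2": [0.5, 1.0, 1.5],        # type-agnostic global score weight
    "lambda_": [0.3, 0.6, 0.9],      # one-hop neighbor weight
    "h": [2, 3, 4],                  # number of hops
}

def grid_configs(grid):
    """Yield one config dict per point of the Cartesian product."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid_configs(grid))
# 4 * 3 * 3 * 3 * 3 = 324 configurations for these five parameters.
```

Each yielded dict would then be passed to one training run, keeping the setting with the best validation score.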
diff --git a/ajointlearningframeworkforrestaurantsurvivalpredictionandexplanation/full.md b/ajointlearningframeworkforrestaurantsurvivalpredictionandexplanation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c6420eaa99c583d476f23c3adbffdcdaa4987d89 --- /dev/null +++
b/ajointlearningframeworkforrestaurantsurvivalpredictionandexplanation/full.md @@ -0,0 +1,423 @@ +# A Joint Learning Framework for Restaurant Survival Prediction and Explanation + +Xin Li$^{1}$, Xiaojie Zhang$^{1}$, Jiahao Peng$^{1}$, Rui Mao$^{1}$, Mingyang Zhou$^{1}$, Xing Xie$^{2}$, Hao Liao$^{1*}$ + +$^{1}$Shenzhen University, China + +$^{2}$Microsoft Research Asia + +{1910273046,1800271040,2070276145}@email.szu.edu.cn + +{mao, zmy, haoliao}@szu.edu.cn + +xing.xie@microsoft.com + +# Abstract + +The bloom of the Internet and recent breakthroughs in deep learning techniques open a new door to AI for E-commerce, with a trend that has evolved from using a few financial factors, such as liquidity and profitability, to using more advanced AI techniques to process complex and multi-modal data. In this paper, we tackle the practical problem of restaurant survival prediction. We argue that traditional methods ignore two essential aspects, which are very helpful for the task: 1) modeling customer reviews and 2) jointly considering status prediction and result explanation. Thus, we propose a novel joint learning framework for explainable restaurant survival prediction based on the multi-modal data of user-restaurant interactions and users' textual reviews. Moreover, we design a graph neural network to capture the high-order interactions and a co-attention mechanism to capture the most informative and meaningful signal from noisy textual reviews. Our results on two datasets show a significant and consistent improvement over the SOTA techniques (average $6.8\%$ improvement in prediction and $45.3\%$ improvement in explanation). + +# 1 Introduction + +Business survival prediction is a hot topic in the management and finance literature. Traditional methods rely heavily on financial factors (e.g., liquidity, solvency, and profitability) (Ziman, 1991; Lussier, 1996; Pereira et al., 2020).
However, there are two significant drawbacks: 1) the financial factors of a shop/company are hard to obtain due to privacy issues; 2) financial factors are macro indicators that reveal the status of a business only on a coarse level. With the development of information techniques, much restaurant-related data can be collected online. For example, people can post check-ins after dining at a restaurant, and they can share reviews to show how/why they like the restaurant via an online review platform such as Yelp.com. Moreover, reviews contain informative user feedback at a fine-grained level. More importantly, the feedback, which deeply reflects the restaurant's operating status, can in turn help to generate explainable prediction reasons. Some recent research works also verify this, and the use of online reviews to understand business performance is an emerging trend (Babic Rosario et al., 2016; Kong et al., 2017). + +Recent advances in deep learning have produced various models that exploit reviews and interactions for different kinds of tasks, such as recommendation (Wang et al., 2019), fake news detection (Potthast et al., 2018; Wang, 2017), and rating prediction (Tay et al., 2018), but little attention has been paid to restaurant survival analysis. In this paper, we propose a novel joint learning framework to tackle the challenging task of explainable restaurant survival prediction. Our model consists of two compulsory modules: the co-attention network for selecting valuable review texts and the graph neural network for learning high-order interactions on the user-restaurant graph. Specifically, the co-attention mechanism is used to select meaningful review text, which is a feature selection and learning process. The graph of user-item interactions could reveal the preference similarity between users (or items).
Therefore, the construction of graph neural networks, on which we encode high-order relationships, can enhance the representation of reputed users and high-quality restaurants by modeling high-order user-restaurant interactions, which is the key idea of our model. + +Merely predicting the future survival status of a restaurant is inadequate. It is also critical for businesses to understand why they will prosper or close in the future. Fortunately, we can leverage NLP models to encode the massive user reviews and output some explanations, just like a document summarization process. To this end, we jointly train the survival prediction and explanation tasks.
+ +# 2 Related Work + +Restaurant Survival Analysis: Store survival analysis is an essential and practical research topic in the financial and marketing field, which offers deep insights into stores' financial affairs, marketing strategies, and management (Parsa et al., 2005; Kim and Gu, 2006; Liang et al., 2016; Du Jardin, 2017). Traditionally, researchers usually leverage restaurant financial factors to build linear forecasting models, which are sensitive and hard to obtain. With the development of online services, researchers find that User Generated Content (UGC), such as textual reviews from Yelp.com or Dianping.com, contains massive information covering diverse aspects of stores (restaurants in this paper). Leveraging the heterogeneous UGC can effectively improve the performance of restaurant survival prediction models (Lian et al., 2017). However, the main weaknesses of this group of methods are threefold: 1) They used traditional NLP models such as LDA, bag-of-words, or word2vec; 2) they did not consider the interaction graph between customers and restaurants; 3) they did not explicitly reduce the noisy information from the raw UGC. + +Pre-trained Model: The pre-trained model has been widely used in the field of NLP. It is trained on large-scale open-domain datasets with self-supervised learning tasks to encode common language knowledge into the model. The well-trained model can be fine-tuned with a small amount of labeled data to perform well on the given target task. For example, BERT (Devlin et al., 2019) is a multi-layer bidirectional Transformer encoder and uses Masked Language Model (MLM) and Next Sentence Prediction (NSP) to capture word and sentence-level representations. UniLM (Dong et al., 2019) is based on Bert, which achieved great success on NLP tasks such as unidirectional, bidirectional, and sequence-to-sequence prediction. 
Moreover, some studies (Qiu et al., 2020) have shown that the pre-trained model is capable of capturing hierarchy-sensitive and syntactic dependencies, which is beneficial to downstream NLP tasks. + +Graph Representation: Graph Neural Network (GNN) is a key component in our framework. GNNs represent a node by fusing self-information with neighborhood information on the graph in a message-passing manner. For example, LightGCN (He et al., 2020) simplifies the classical GCN (Kipf and Welling, 2017) and NGCF (Wang et al., 2019) by removing the transformation layer and non-linear activation functions, and uses a mean pooling aggregator to fuse the neighborhood information. It handles homogeneous graphs. The Heterogeneous Graph Neural Networks model (HetGNN) (Zhang et al., 2019) considers heterogeneous structural (graph) information as well as the heterogeneous content information of each node. Several investigations (Battiston et al., 2021) have already shown that the presence of higher-order interactions may substantially impact the dynamics of networked systems. Thus, we argue that it is necessary to encode high-order interactions from the user-restaurant graph to better model user preference and restaurant status, which the existing literature ignores. + +# 3 Problem Statement + +We first introduce some definitions and notations, then introduce the problem formulation. + +User-Restaurant Interaction Graph: let $G = (U, V, E)$ represent the user-restaurant interaction graph, where $U = \{u_1, u_2, \ldots, u_n\}$ denotes the set of users, and $V = \{v_1, v_2, \ldots, v_m\}$ is a set of restaurants. $E = \{(u, v) | u \in U, v \in V\}$ denotes the
Similarly, the reviews of the user $u$ are defined as $(R_{u,l_1}^{(U)},\dots,R_{u,l_u}^{(U)})$ . We further use $U_v = (u_1,u_2\dots)$ to denote the list of users who have reviewed restaurant $v$ . The reviews of users related to restaurant $v$ can be defined as $(R_{u_1,1}^{(U)},\dots R_{u_n,l_v}^{(U)})$ , $u_i\in U_v$ , where $R_{u_i,j}^{(V)}$ representing the $j$ -th review comes from the reviews of user $u_i$ . + +Prediction Task: to predict the future status of the restaurant. This is a binary classification task, and 0 means the restaurant will be shut down and 1 means normal operation. + +Explanation Task: besides the binary prediction task, the model also contains an explanation task, in which a sentence of summarization text $Y = (w_{1},\ldots ,w_{T})$ will be generated. We invited 30 evaluators who are split into two groups to manually select a few sentences (about 30 words) from all the restaurant reviews to represent the key reasons for each restaurant's business prosperity, which will be used as ground-truth for training and evaluation. + +Problem Formulation: with the interaction graph $G$ and review collections $R^{(U)}$ and $R^{(V)}$ as input data, we want to make predictions for a given restaurant regarding its future status and meanwhile generate an explaining text. + +# 4 The Proposed Model + +In this section, we introduce our model, a joint learning framework for restaurant survival prediction and explanation, which is illustrated in Figure 1. There are four components in RSPE, including an input module, a co-attention module, a graph representation module and a joint learning module. We will introduce the details in the following sections. + +# 4.1 Input Module + +The function of the input module is to encode the input feature, and the input includes two types of sequences: the reviews of restaurants $(R_{v,1}^{(V)},\dots,R_{v,l}^{(V)})$ , and related users' reviews $(R_{u_1,1}^{(U)},\dots,R_{u_n,l}^{(U)})$ , $u_i\in U_v$ . 
Each sequence includes a list of reviews. This module encodes reviews into embedding representations. Each review is composed of a sequence of sentences. We use UniLM + +(Dong et al., 2019) to transform each sentence into a $d$ -dimensional embedding representation $z \in \mathbb{R}^d$ , because UniLM is pre-trained on a large-scale unsupervised dataset and, in our experiments, performs better than BERT. Given a review $R_{v,i}^{(V)}$ , its embedding vector $\mathbf{r}_{v,i} = \frac{1}{|R_{v,i}^{(V)}|} \sum_{z \in R_{v,i}^{(V)}} z$ is the average of the sentence embeddings in the review. In addition, we use an embedding-lookup operation to get a trainable embedding vector representation for each user and restaurant from her/its ID, which will be used as the input for the graph representation module (introduced in Section 4.3). + +# 4.2 Co-attention Module + +The intuition is simple but powerful. Each user is represented by all reviews that he/she wrote, and the restaurant is represented by all reviews belonging to it. The goal of the co-attention module is to select high-quality reviews from the user/restaurant's review collection and finally merge the reviews' embeddings into one user/restaurant embedding. + +Affinity Matrix: given the user review embeddings $\pmb{a} \in \mathbb{R}^{l \times d}$ (with rows $\pmb{a}_i$ ) and restaurant review embeddings $\pmb{b} \in \mathbb{R}^{l \times d}$ (with rows $\pmb{b}_j$ ), the affinity matrix is calculated by: + +$$
\boldsymbol {M} _ {i, j} = f \left(\boldsymbol {a} _ {i}\right) ^ {\mathrm {T}} \boldsymbol {A} f \left(\boldsymbol {b} _ {j}\right), \tag {1}
$$ + +where $\mathbf{A} \in \mathbb{R}^{d \times d}$ is the weight matrix, and $f(\cdot)$ is a feed-forward neural network. + +Max Pooling Function: we take the maximum value of each row and each column of the matrix, then weight the reviews $\mathbf{a}_i$ and $\mathbf{b}_j$ respectively.
The calculation process is as follows: + +$$
\boldsymbol {\zeta} _ {i} = \left(\mathrm {Gumbel} \left(\max _ {\mathrm {col}} (\boldsymbol {M})\right)\right) ^ {\top} \boldsymbol {a} _ {i}, \tag {2}
$$ + +$$
\boldsymbol {\eta} _ {j} = \left(\mathrm {Gumbel} \left(\max _ {\mathrm {row}} (\boldsymbol {M})\right)\right) ^ {\top} \boldsymbol {b} _ {j}, \tag {3}
$$ + +where $\zeta_{i}$ and $\eta_{j}$ represent the co-attention embeddings of the user and the restaurant. $\mathrm{Gumbel}(\cdot)$ is the Straight-Through Gumbel softmax (Jang et al., 2017); because the arg max function is not differentiable, we use $\mathrm{Gumbel}(\cdot)$ to return a discrete vector and turn the unnormalized vector $e = (e_1,e_2,\dots ,e_d)$ into a probability distribution: + +$$
\boldsymbol {s} _ {i} = \frac {\exp \left(\frac {\boldsymbol {e} _ {i} + \boldsymbol {g} _ {i}}{\tau}\right)}{\sum_ {j = 1} ^ {d} \exp \left(\frac {\boldsymbol {e} _ {j} + \boldsymbol {g} _ {j}}{\tau}\right)}, \tag {4}
$$ + +where $\tau$ is a temperature parameter, and $g_{i}$ is Gumbel noise. In the feedforward process, $\pmb{s}$ is transformed into a one-hot vector $\pmb{k}$ ; we denote this function as $\mathrm{Gumbel}(\pmb{s}) = \pmb{k}$ : + +$$
\boldsymbol {k} _ {i} = \left\{ \begin{array}{l l} 1, & i = \arg \max _ {j} \left(\boldsymbol {s} _ {j}\right) \\ 0, & \text {otherwise} \end{array} \right. \tag {5}
$$ + +![](images/08c46a3cb1c4668d2ebade1b4898dc61c35c22abfb53fc9aaba66ce02e1211b5.jpg) +Figure 1: An overview of the RSPE framework. + +# 4.3 Graph Representation Module + +Restaurants are not isolated. Sometimes we cannot understand why a restaurant becomes so popular if we only consider its review content. Many factors influence the business status of a restaurant, such as nearby competitors and the general social trend.
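Stepping back to the co-attention module, the selection in Eqs. (1)-(5) can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation: $f(\cdot)$ is taken as the identity, $\mathbf{A}$ is a plain weight matrix, and all shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_st(e, tau=1.0, rng=rng):
    """Straight-through Gumbel softmax (Eqs. 4-5): perturb the logits e
    with Gumbel noise, normalize, then emit a hard one-hot vector."""
    g = -np.log(-np.log(rng.uniform(size=e.shape)))  # Gumbel(0, 1) noise
    s = np.exp((e + g) / tau)
    s = s / s.sum()
    k = np.zeros_like(s)
    k[np.argmax(s)] = 1.0            # hard one-hot in the forward pass
    return k

def co_attention(a, b, A):
    """Eqs. 1-3 with f(.) = identity: build the affinity matrix, then pick
    one review per side via the Gumbel trick and use it to weight a / b."""
    M = a @ A @ b.T                  # (l_u, l_v) affinity matrix
    zeta = gumbel_softmax_st(M.max(axis=1)) @ a   # user-side embedding
    eta = gumbel_softmax_st(M.max(axis=0)) @ b    # restaurant-side embedding
    return zeta, eta

l, d = 4, 8
a = rng.normal(size=(l, d))          # 4 user reviews, 8-dim embeddings
b = rng.normal(size=(l, d))          # 4 restaurant reviews
zeta, eta = co_attention(a, b, np.eye(d))
# Because k is exactly one-hot, each output equals one selected review row.
```

The hard one-hot forward pass mirrors Eq. (5); in training, the straight-through estimator would route gradients through the soft distribution of Eq. (4).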
In order to model the global context of restaurants, we construct a bipartite graph, on which the nodes are restaurants and users, and the edges are user-restaurant interactions. Since GNNs have demonstrated great superiority in learning useful information from graph-structured data (Veličković et al., 2018; Hamilton, 2020), in this section, we introduce a graph representation module to learn meaningful patterns on the user-restaurant interaction graph, so that the restaurants' information is enhanced by their neighborhood. + +The interaction graph $G$ is illustrated in Figure 1. It stems from the idea that a specific interaction between the user and the restaurant can reveal the restaurant's survival. + +Node Embedding: we obtain a trainable embedding vector from the user ID and the restaurant ID, denoted by $\pmb{p}_u^{(0)}$ and $\pmb{p}_v^{(0)}$ respectively. + +High-order Neighbor Aggregation: neighbor nodes at different propagation orders have different effects on the target node. By stacking multiple propagation layers, we can explore high-order connectivity information and enhance the representation. According to the propagation rules, we obtain the neighbor nodes of the first-order, second-order, and third-order propagation layers adjacent to the target node, and the propagation layer embedding is calculated as follows: + +$$
\boldsymbol {p} _ {u} ^ {(t + 1)} = \sum_ {v \in S _ {u}} \frac {1}{\sqrt {\left| S _ {u} \right|} \sqrt {\left| S _ {v} \right|}} \boldsymbol {p} _ {v} ^ {(t)}, \tag {6}
$$ + +$$
\boldsymbol {p} _ {v} ^ {(t + 1)} = \sum_ {u \in S _ {v}} \frac {1}{\sqrt {| S _ {v} |} \sqrt {| S _ {u} |}} \boldsymbol {p} _ {u} ^ {(t)}, \tag {7}
$$ + +where $\pmb{p}_u^{(t)}$ and $\pmb{p}_v^{(t)}$ represent the embeddings of user $u$ and restaurant $v$ after the $t^{th}$ propagation layer, and $S_{u}$ and $S_{v}$ represent the first-hop neighbors of user $u$ and restaurant $v$ .
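The propagation of Eqs. (6)-(7), followed by the weighted layer combination, can be sketched in NumPy. This is an illustrative sketch of LightGCN-style propagation, assuming every node has at least one edge and uniform layer weights by default:

```python
import numpy as np

def propagate(adj, p_u, p_v, T=3, alphas=None):
    """Symmetric-normalized propagation over the user-restaurant bipartite
    graph. adj[u, v] = 1 iff user u reviewed restaurant v; p_u / p_v hold
    the layer-0 node embeddings, one row per node."""
    d_u = adj.sum(axis=1, keepdims=True)        # |S_u| for each user
    d_v = adj.sum(axis=0, keepdims=True)        # |S_v| for each restaurant
    norm = adj / (np.sqrt(d_u) * np.sqrt(d_v))  # 1 / (sqrt|S_u| sqrt|S_v|)
    if alphas is None:
        alphas = np.full(T + 1, 1.0 / (T + 1))  # uniform layer weights
    out_u, out_v = alphas[0] * p_u, alphas[0] * p_v
    for t in range(1, T + 1):
        # Eqs. 6-7: one simultaneous hop in each direction.
        p_u, p_v = norm @ p_v, norm.T @ p_u
        # Weighted layer combination of the per-layer embeddings.
        out_u = out_u + alphas[t] * p_u
        out_v = out_v + alphas[t] * p_v
    return out_u, out_v

# Toy graph: 2 users x 2 restaurants, identity features.
adj = np.array([[1.0, 1.0], [1.0, 0.0]])
q_u, q_v = propagate(adj, np.eye(2), np.eye(2), T=2)
```

Stacking `T` hops lets each user embedding absorb information from restaurants up to `T` edges away, which is the high-order connectivity described above.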
To avoid large embedding scales, the node embeddings at each convolution layer need to be normalized. Then, the propagation layer embeddings are aggregated to obtain the final target node embedding. The calculation process is as follows: + +$$
\boldsymbol {p} _ {u} = \sum_ {t = 0} ^ {T} \boldsymbol {\alpha} _ {t} \boldsymbol {p} _ {u} ^ {(t)}, \quad \boldsymbol {p} _ {v} = \sum_ {t = 0} ^ {T} \boldsymbol {\alpha} _ {t} \boldsymbol {p} _ {v} ^ {(t)}, \tag {8}
$$ + +where $\alpha_{t}$ represents the weight of the $t^{th}$ layer embedding ( $t = 0,1,\dots,T$ with $T = 3$ ). + +For each restaurant $v$ , there will be many user reviews. Therefore, we use mean pooling to aggregate the vector representations of all users $u \in S_v$ who have reviewed restaurant $v$ , which is expressed as follows: + +$$
\boldsymbol {p} _ {S _ {v}} = \frac {1}{| S _ {v} |} \sum_ {u \in S _ {v}} \boldsymbol {p} _ {u} \tag {9}
$$ + +# 4.4 Joint Learning Module + +Joint learning is an inductive transfer method to improve generalization by using the domain information in the training signals of related tasks as an inductive bias. Since the prediction and explanation tasks are associated, we jointly train them in a unified framework to achieve better generalization. + +We aggregate the embeddings of users and restaurants from the co-attention module and graph representation module. The formula is as follows: + +$$
\boldsymbol {q} _ {u} = \boldsymbol {\zeta} _ {i} + \boldsymbol {p} _ {S _ {v}}, \quad \boldsymbol {q} _ {v} = \boldsymbol {\eta} _ {j} + \boldsymbol {p} _ {v}. \tag {10}
$$ + +Prediction Task: the factorization machine (Rendle, 2010) helps extract the most essential latent or hidden features, which we use to solve the classification problem.
The formula is as follows: + +$$
f (\boldsymbol {q}) = \boldsymbol {b} + \sum_ {i = 1} ^ {n} \boldsymbol {w} _ {i} \boldsymbol {q} _ {i} + \sum_ {i = 1} ^ {n} \sum_ {j = i + 1} ^ {n} \left\langle \boldsymbol {h} _ {i}, \boldsymbol {h} _ {j} \right\rangle \boldsymbol {q} _ {i} \boldsymbol {q} _ {j}, \tag {11}
$$ + +where $q_i$ is the $i^{th}$ entry of $\pmb{q} = [\pmb{q}_u, \pmb{q}_v]$ , $b$ is the bias, and $w_i$ and $\pmb{h}_i \in \mathbb{R}^k$ are parameters to be learned. The loss function uses sigmoid cross entropy: + +$$
L _ {p} = - \frac {1}{2 | \boldsymbol {\Theta} |} \sum_ {(u, v) \in \boldsymbol {\Theta}} \left[ \boldsymbol {y} \log \hat {\boldsymbol {y}} + (1 - \boldsymbol {y}) \log (1 - \hat {\boldsymbol {y}}) \right], \tag {12}
$$ + +where $\pmb{y}$ is the ground-truth label and $\Theta$ represents the training set. + +Explanation Task: since the Gated Recurrent Unit (GRU) (Cho et al., 2014) performs well in text generation, we choose it for the explanation task. The details of the GRU are as follows. First, calculate the initial hidden state $h_0$ : + +$$
\boldsymbol {h} _ {0} = \tanh \left(\boldsymbol {w} ^ {1} \boldsymbol {q} _ {u} + \boldsymbol {w} ^ {2} \boldsymbol {q} _ {v} + \boldsymbol {w} ^ {3} \hat {\boldsymbol {y}} + \boldsymbol {b} _ {e}\right), \tag {13}
$$ + +where $\pmb{w}^1$ , $\pmb{w}^2$ and $\pmb{w}^3$ are parameters to be learned, and $\pmb{b}_e$ is the bias. + +The hidden state at step $t$ depends on the state at step $t - 1$ : + +$$
\boldsymbol {h} _ {t} = G R U \left(\boldsymbol {h} _ {t - 1}, \boldsymbol {w} _ {t}\right), \tag {14}
$$ + +where $\pmb{w}_t$ is the word generated at time $t$ .
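Looking back at the prediction task, the factorization machine of Eq. (11) can be sketched in NumPy using the standard O(nk) identity for the pairwise term. A minimal sketch with a scalar bias (not the authors' exact parameterization); `H` holds one k-dimensional factor vector per input entry:

```python
import numpy as np

def fm_score(x, b, w, H):
    """Second-order factorization machine (Eq. 11) on the concatenated
    embedding x = [q_u, q_v]."""
    linear = b + w @ x
    # Pairwise term via the standard O(nk) identity:
    #   sum_{i<j} <h_i, h_j> x_i x_j
    # = 0.5 * (||H^T x||^2 - sum_i x_i^2 ||h_i||^2)
    xh = H.T @ x
    pairwise = 0.5 * (xh @ xh - ((x ** 2) * (H ** 2).sum(axis=1)).sum())
    return linear + pairwise

def predict_survival(x, b, w, H):
    """Sigmoid of the FM score: the predicted probability of survival,
    which feeds the cross-entropy loss of Eq. (12)."""
    return 1.0 / (1.0 + np.exp(-fm_score(x, b, w, H)))

# With x = [1, 2], a zero linear part, and all factors equal to 1, the
# only pairwise term is <h_0, h_1> * x_0 * x_1 = 2.
score = fm_score(np.array([1.0, 2.0]), 0.0, np.zeros(2), np.ones((2, 1)))
```

The identity avoids the explicit double sum over pairs, which matters when the concatenated embedding is long.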
The final output layer generates the distribution $\boldsymbol{\eta}_t$ over words from the hidden state at time $t$:

$$
\boldsymbol{\eta}_{t} = O\left(\boldsymbol{w}^{4} \boldsymbol{h}_{t - 1} + \boldsymbol{b}_{r}\right), \tag{15}
$$

where $\pmb{w}^4$ are parameters to be learned and $\pmb{b}_r \in \mathbb{R}^{|\mathcal{V}| \times l_d}$ is the bias. $|\mathcal{V}|$ is the vocabulary size and $O(\cdot)$ is the softmax function. We then use beam search to select the best generated text $\pmb{Y}$.

We aim to maximize the probability of the ground-truth text, so the loss function for the explanation task is:

$$
L_{g} = \frac{1}{|\boldsymbol{\Theta}|} \sum_{(u, v) \in \boldsymbol{\Theta}} \sum_{t = 1}^{T} \left(- \log \boldsymbol{\eta}_{t, \hat{l}_{t}}\right), \tag{16}
$$

where $\hat{l}_t$ is the ground-truth word at time $t$.

Multi-task Loss: by sharing representations between the related tasks, we aggregate the three loss terms of the two tasks for optimization:

$$
\mathcal{L} = \lambda_{1} L_{p} + \lambda_{2} L_{g} + \lambda_{3} \|\boldsymbol{\Psi}\|_{2}^{2}, \tag{17}
$$

where $\lambda_{\xi}$ ($\xi = 1, 2, 3$) are hyper-parameters that control the weight of each loss term and $\Psi$ denotes the set of trainable parameters. For more details on the hyper-parameter settings, please refer to the appendix.

# 5 Experiments

# 5.1 Datasets

We experiment with two public datasets; their basic statistics are listed in Table 1.

Dianping: the largest consumer review site in China. This dataset records reviews from Jan. 2011 to Dec. 2011 and each restaurant's status in Dec. 2011 as the binary label. For Dianping, the top 3 most popular cities are used in the experiments: Shanghai (SH), Beijing (BJ) and Guangzhou (GZ). Yelp: the largest review site for businesses.
We use restaurant reviews from Jan. 2019 to Dec. 2019 and each restaurant's status in Dec. 2019 as the binary label. For Yelp, the top 3 most popular states are Nevada (NV) and Arizona (AZ) in the United States, and Ontario (ON) in Canada.

Due to space limitations, please refer to the appendix for more details on data processing.

Table 1: Statistics of the cleaned datasets from Dianping and Yelp
| Dataset | City/State | #Restaurants | #Closed restaurants | Closure ratio |
| --- | --- | --- | --- | --- |
| Dianping | SH | 10251 | 3312 | 32.31% |
| Dianping | BJ | 5067 | 1308 | 25.81% |
| Dianping | GZ | 1932 | 509 | 26.34% |
| Yelp | NV | 4764 | 223 | 4.68% |
| Yelp | AZ | 6623 | 258 | 3.90% |
| Yelp | ON | 5688 | 209 | 3.67% |
# 5.2 Metrics

In our experiments, we use AUC (Hanley and McNeil, 1982) to evaluate the prediction task, and BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) to evaluate the explanation task. ROUGE is based on the co-occurrence of n-grams between the generated and the ground-truth text. ROUGE-N (N = 1, 2) counts matching N-grams, and ROUGE-L matches the longest common subsequence. ROUGE-SU4 uses a skip-gram strategy: when matching the generated explanation against the ground-truth text, words need not be contiguous, and several words may be "skipped". Larger BLEU and ROUGE values indicate better explainability.

# 5.3 Performance Evaluation

To evaluate the prediction task, we compare RSPE with two groups of baselines that perform the same binary classification task as our prediction module.

Traditional Machine Learning: we take the heterogeneous information obtained by encoding reviews with Word2Vec (Church, 2017) and Bag of Words as input features for traditional machine learning methods, including: 1) LR (Cortes and Vapnik, 1995); 2) SVM (Cortes and Vapnik, 1995); 3) GBDT (Friedman, 2001).

Deep Learning: we also compare with several competitive deep learning methods, including: 1) text-CNN (Kim, 2014): a modified convolutional neural network model. 2) text-RNN (Lai et al., 2015): a modified long short-term memory model. 3) MPCN (Tay et al., 2018): a review-based attention network that combines multiple pointers for recommendation. 4) HetGNN (Zhang et al., 2019): a heterogeneous graph neural network for various graph mining tasks that aggregates different types of nodes. 5) DCA (Liao et al., 2020): a review-based attention neural model for data augmentation by selecting concepts. 6) HGAT (Li et al., 2020): a hierarchical graph attention network for semi-supervised node classification tasks.

Table 2: Results of the prediction task

| Method | Dianping-SH | Dianping-BJ | Dianping-GZ | Yelp-NV | Yelp-AZ | Yelp-ON |
| --- | --- | --- | --- | --- | --- | --- |
| LR | 0.7203 | 0.7081 | 0.7103 | 0.5812 | 0.6747 | 0.6111 |
| SVM | 0.7097 | 0.7049 | 0.6518 | 0.5391 | 0.6092 | 0.561 |
| GBDT | 0.573 | 0.6003 | 0.5932 | 0.645 | 0.7135 | 0.653 |
| text-CNN | 0.5647 | 0.5537 | 0.5604 | 0.5896 | 0.5558 | 0.5414 |
| text-RNN | 0.5683 | 0.5656 | 0.5675 | 0.5535 | 0.5317 | 0.5456 |
| MPCN | 0.5573 | 0.6783 | 0.6782 | 0.6972 | 0.7672 | 0.7563 |
| HetGNN | 0.5107 | 0.5165 | 0.499 | 0.6234 | 0.6711 | 0.6122 |
| HGAT | 0.7753 | 0.7463 | 0.7598 | 0.7718 | 0.7582 | 0.7825 |
| DCA | 0.8412 | 0.8612 | 0.8379 | 0.9014 | 0.8752 | 0.8856 |
| RSPE | 0.8994 | 0.9073 | 0.9096 | 0.9521 | 0.9171 | 0.9379 |
| Improvement | 6.91% | 5.35% | 8.56% | 5.63% | 4.80% | 5.91% |

To evaluate the explanation task, we compare RSPE with two groups of baselines that both perform well on text generation.

Generative-based Methods: NRT (Li et al., 2017) generates abstractive text with good linguistic quality from user review information to explain predictions. DCA (Liao et al., 2020) is an attention-based neural framework that generates diverse texts by learning from large amounts of text. PETER (Li et al., 2021) is a personalized Transformer that shows good performance on text generation tasks.

Retrieval-based Method: the retrieval approach selects the most important sentences from reviews as the explanation. LexRank (Erkan and Radev, 2004) is an unsupervised text summarization method based on graph-based lexical centrality that summarizes reviews.

# 5.4 Implementation Details

In our experiments, we randomly split the datasets into a training set (70%), a validation set (15%), and a test set (15%). We follow the corresponding papers when tuning the baselines to ensure their best results. The hyperparameter settings and implementation details are listed in the appendix.

# 5.5 Results on the Prediction Task

The overall prediction results are shown in Table 2. Our model's improvement over the best baseline is substantial: a performance gain of up to 6.9%/5.4%/8.6% on the Dianping cities SH/BJ/GZ, and 4.8%/5.6%/5.9% on the Yelp states AZ/NV/ON, which demonstrates the effectiveness of our model.

In addition, we make the following four observations about the results. First, MPCN, DCA, and HGAT are generally better than the traditional methods. These methods all use an attention mechanism. HGAT also considers heterogeneous graph convolution, demonstrating

Table 3: Results of the explanation task
| Method | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-SU4 |
| --- | --- | --- | --- | --- | --- |
| Dianping-SH | | | | | |
| NRT | 1.34 | 2.7 | 0.78 | 2.67 | 0.94 |
| LexRank | 1.49 | 4.85 | 1.04 | 4.83 | 1.14 |
| DCA | 1.50 | 6.07 | 1.65 | 5.17 | 1.76 |
| PETER | 0.97 | 5.72 | 1.09 | 4.89 | 1.45 |
| RSPE | 2.19 | 7.15 | 1.79 | 7.12 | 1.98 |
| Improvement | 46.0% | 17.8% | 8.5% | 37.7% | 12.5% |
| Dianping-BJ | | | | | |
| NRT | 1.19 | 2.39 | 0.62 | 2.393 | 0.127 |
| LexRank | 1.65 | 4.39 | 0.98 | 4.39 | 0.98 |
| DCA | 1.77 | 4.98 | 1.20 | 4.93 | 1.28 |
| PETER | 1.04 | 4.53 | 1.22 | 4.58 | 1.35 |
| RSPE | 2.90 | 6.87 | 1.94 | 6.85 | 2.02 |
| Improvement | 63.2% | 38.0% | 59.0% | 38.9% | 57.7% |
| Dianping-GZ | | | | | |
| NRT | 1.31 | 5.46 | 0.57 | 0.546 | 0.78 |
| LexRank | 1.74 | 5.28 | 0.90 | 5.27 | 0.86 |
| DCA | 1.79 | 7.68 | 2.55 | 7.42 | 2.66 |
| PETER | 1.03 | 12.00 | 0.59 | 12.43 | 1.25 |
| RSPE | 3.28 | 13.23 | 3.74 | 13.07 | 3.85 |
| Improvement | 83.2% | 10.3% | 46.7% | 5.1% | 43.8% |
| Yelp-NV | | | | | |
| NRT | 1.31 | 21.08 | 8.86 | 16.45 | 10.94 |
| LexRank | 1.33 | 15.95 | 3.45 | 12.3 | 5.21 |
| DCA | 2.09 | 28.51 | 12.08 | 23.21 | 13.53 |
| PETER | 1.06 | 17.69 | 2.72 | 9.07 | 7.39 |
| RSPE | 2.66 | 30.15 | 13.48 | 24.20 | 14.69 |
| Improvement | 27.4% | 5.7% | 11.5% | 4.2% | 8.6% |
| Yelp-AZ | | | | | |
| NRT | 1.64 | 21.67 | 8.60 | 16.57 | 10.90 |
| LexRank | 1.69 | 19.35 | 5.31 | 15.08 | 7.01 |
| DCA | 2.54 | 30.04 | 13.45 | 24.12 | 15.02 |
| PETER | 1.36 | 14.02 | 0.19 | 9.33 | 10.4 |
| RSPE | 3.15 | 32.50 | 15.58 | 26.66 | 17.25 |
| Improvement | 24.2% | 8.2% | 15.8% | 10.5% | 14.8% |
| Yelp-ON | | | | | |
| NRT | 1.29 | 24.51 | 2.42 | 19.63 | 14.53 |
| LexRank | 1.1 | 16.17 | 4.21 | 12.74 | 5.63 |
| DCA | 1.67 | 27.36 | 10.20 | 20.31 | 12.31 |
| PETER | 0.94 | 23.19 | 2.12 | 13.1 | 9.87 |
| RSPE | 2.13 | 31.01 | 14.74 | 24.83 | 16.43 |
| Improvement | 27.7% | 13.3% | 44.5% | 22.2% | 33.4% |
that the information of the heterogeneous graph and the attention mechanism may contribute to model performance. Second, a simple graph structure, such as HetGNN, cannot perform well on the prediction task. Third, our model performs well on both the Dianping and Yelp datasets, demonstrating that it is robust across different datasets. Fourth, our model achieves the best performance: it can not only automatically mine important information from massive reviews through the co-attention module but also combine the interaction information between users and restaurants to capture the most informative and meaningful signals from noisy textual reviews.

# 5.6 Results on the Explanation Task

The detailed results are shown in Table 3. First, our model performs significantly better on the explanation task than the SOTA methods. Taking BLEU as an example, RSPE achieves improvements of 46.0%/63.2%/83.2%/27.4%/24.2%/27.7% on SH/BJ/GZ/NV/AZ/ON, an average improvement of 45.3%. On the ROUGE metrics (ROUGE-1/2/L/SU4), which mainly measure overall accuracy, RSPE achieves an improvement of 49.1% on BJ, and the average improvement across all datasets is as high as 23.8%. Second, across the 6 cities/states, NRT's explanation performance is poor because it learns latent factors from historical records and can only output general-purpose expressions. Third, the retrieval-based LexRank does not perform well because it focuses on similarity matching and lacks personalized expression. DCA's framework is complex and its feature-selection ability is insufficient, so its overall performance is lower than our model's. Although PETER proposes a new Transformer structure for text generation, the results show that its improvement is modest. Finally, RSPE performs significantly better on both the Chinese and the English datasets because we leverage a graph convolutional neural network to model hidden collaborative signals from user-restaurant interactions, which enables the model to learn from reputable reviews and improve the quality of the explanation text. This observation is in line with the results above, and further verifies that by including graph structure in the modeling process, our model can learn the interaction information between users and restaurants and thus generate informative textual explanations for restaurant survival.

Table 4: Explanations generated by RSPE and the baselines.

| Case / Method | Text |
| --- | --- |
| Case 1 | Delicious! The customer service is pretty good and the open all the way to 3 am In the morning. The prime burgers are excellent! |
| LexRank | The customer service is pretty good! |
| NRT | Best! |
| DCA | The customer service is pretty good! |
| RSPE | Delicious! The customer service is pretty good! The only issue was the front of the best! It's a lot of what is some of the best. |
| Case 2 | The environment is not good, the service is not good, and the main dishes are terrible. After several times of food, the boss has always been very disdainful. Noodles with soybean paste is much more expensive than before, it is far from before. Anyway, I won't go again... |
| LexRank | The taste is OK, the environment is just so-so, noodles with soybean paste is much more expensive than before. |
| NRT | The taste is good, the environment is bad, and the service is not good. |
| DCA | The taste is good, the environment is bad, need to line up and wait every time, the price is much higher than before. |
| RSPE | The taste is good, the environment is not good, the service is not good, and the dishes are poor. The price is much higher than before, in short, it is not recommended. |
| Case 3 | The ostentation is huge, and the dining environment is also good. Unfortunately, the most important food was terrible. The ingredients were not fresh, and the taste was not good enough. I would not care about it any more. |
| LexRank | The environment is good, the service is good. |
| NRT | The environment is good, the dishes are not good. |
| DCA | The environment is good, the service is good, but the food is too bad. The ingredients were not fresh, that's too bed. |
| RSPE | The ostentation and environment are very good, and the service is also very good, but the food is too bad. The ingredients are not fresh, so I won't go there any more. |

![](images/7c83b1a16b41eeb49e097b1e94fbfa7049adbe7f9d3ad9f0f6c57d09bc0f298b.jpg)
Figure 2: RSPE ablation analysis in AUC and BLEU

Table 5: Results on the fluency evaluation.
| Measures | NRT | LexRank | DCA | Ours |
| --- | --- | --- | --- | --- |
| Fluency (Kappa) | 2.98 (0.76) | 3.24 (0.73) | 3.46 (0.74) | 3.75 (0.80) |
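The Kappa values in Table 5 measure inter-annotator agreement. As a minimal illustration of how such agreement can be computed, the sketch below implements Fleiss' kappa over an items-by-categories count table (the toy ratings are invented for illustration and are not the study's data; the paper follows Li et al. (2019) for its exact protocol):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa. counts[i][j] = number of annotators assigning
    item i to category j; every row must sum to the same n raters."""
    counts = np.asarray(counts, dtype=float)
    N = counts.shape[0]
    n = counts[0].sum()                                    # raters per item
    p_j = counts.sum(axis=0) / (N * n)                     # category shares
    P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()
    return (P_bar - P_e) / (1.0 - P_e)

# toy example: 4 items rated by 5 annotators into 5 score buckets
ratings = [
    [5, 0, 0, 0, 0],
    [0, 5, 0, 0, 0],
    [0, 0, 4, 1, 0],
    [0, 0, 0, 0, 5],
]
print(round(fleiss_kappa(ratings), 2))  # 0.87: substantial agreement
```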
# 5.7 Ablation Analysis

To study the effectiveness of joint learning, we train each task independently, denoted "Prediction Only" and "Explanation Only". Additionally, we remove the graph representation module from the model, denoted "RSPE-G". The results are shown in Figure 2. Training each task independently can achieve better results on its own metric, but RSPE achieves a balance between prediction accuracy and explanation ability through the joint learning framework. In addition, it is clear that the graph representation module makes a significant contribution. This again shows that modeling high-order interactions with a graph strengthens the ability to capture the most informative and meaningful signals from noisy textual reviews, and thus yields more accurate predictions and more reasonable explanations.

# 5.8 Case Analysis

We take three cases generated by LexRank, NRT, DCA, and RSPE as examples, shown in Table 4. We bold the frequent adjectives and nouns in the reviews as keywords; the Dianping cases are translated from Chinese to English. The table shows that: 1) the explanations generated by RSPE are more comprehensive, covering many important factors such as environment, service, taste, and price, and the generated content is highly consistent with the ground-truth text; 2) RSPE has a strong ability to summarize relevant sentences, such as "The ostentation and environment are very good" in Case 3; 3) RSPE can generate personalized expressions, such as "The only issue was the front of the best" in Case 1 and "in short, it is not recommended" in Case 2.

![](images/6af6dd624ae5f980bae818e7eb8d75d0f963dafbc7c80298a339d1dccdbe1fb0.jpg)

![](images/67eb654f7188fc969111877036e3aa7cf97079aae03783b8b14fc9b9ca5ee806.jpg)
Figure 3: Review word cloud and failure rate statistics.
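The keyword analysis above, like the review word clouds in Figure 3, ultimately rests on token-frequency counting over reviews. A minimal sketch with invented example snippets (illustrative only, not the Dianping/Yelp data):

```python
from collections import Counter

# invented review snippets, for illustration only
reviews = [
    "the service is good and the taste is good",
    "good environment but the service is slow",
    "fresh ingredients and good taste",
]
stopwords = {"the", "is", "and", "but"}  # tiny hypothetical stoplist

tokens = [w for r in reviews for w in r.split() if w not in stopwords]
top = Counter(tokens).most_common(3)
print(top)  # the most frequent content words drive the word cloud
```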
# 5.9 Fluency Evaluation

Next, we evaluate the fluency of the generated sentences through human judgment. We randomly selected 100 samples and invited 5 annotators to assign scores from 1 (very poor) to 5 (very satisfied). The human evaluation results are reported in Table 5. They show that our model outperforms the other three methods on the Fluency and Kappa (Li et al., 2019) metrics.

# 5.10 Survival Discussions

A restaurant's survival is related not only to user reviews but also to many off-site factors, such as financial breakdown and competitive pressure. We therefore hope to mine instructive explanations for the sustainable development of the restaurant industry through data analysis.

As shown in Figure 3 (a) and (c), Dianping users pay more attention to taste (taste, good, fresh), whereas Yelp users are more concerned about the environment and service (service, place, way, location). As shown in Figure 3 (c) and (d), per capita consumption in medium-sized cities is generally higher than in big and small cities, and the failure rate of restaurants in small cities is much lower than in big cities. These findings suggest that restaurant survival can be explored from a more fine-grained perspective to mine such rules and help restaurants adjust their strategies.

# 6 Conclusion

In this paper, we tackle the problem of restaurant survival prediction, an important task for social good. Unlike traditional methods, which rely heavily on sensitive financial indicators, we use deep learning techniques to mine useful signals from massive UGC. We are the first to conduct future status prediction and explanation simultaneously in a joint framework. Our model has two key components, i.e., the graph representation module and the co-attention module. We conduct extensive experiments on two datasets.
Results demonstrate that our proposed model achieves SOTA performance on both the prediction and explanation tasks.

# 7 Limitations

The current limitations of this paper are threefold. First, only a limited set of features is used. Whether a restaurant can survive is influenced by many factors, such as finances and social circumstances (e.g., Covid-19), that can threaten its survival; we cannot fully account for such factors outside the review text, as this paper takes an NLP perspective on the restaurant survival prediction task. Second, the model structure is not lightweight, and there is still room for simplification, for instance in the combination of the attention mechanisms and graph neural networks. Third, the scope of the data is limited: we have only tested the model on two datasets covering 6 cities/states, and have not evaluated it on data from other online service platforms.

# 8 Acknowledgments

Hao Liao is the corresponding author. We thank Dr. Jianxun Lian and Dr. Xiting Wang for their valuable suggestions and help. This work was supported by the Natural Science Foundation of China under Grants no. 62276171 and 62072311, the Natural Science Foundation of Guangdong Province of China under Grants nos. 2019A1515011173 and 2019A1515011064, the Shenzhen Fundamental Research-General Project under Grant no. JCYJ20190808162601658, the CCF-Baidu Open Fund, and the NSF-SZU and Tencent-SZU funds.

# References

Ana Babić Rosario, Francesca Sotgiu, Kristine De Valck, and Tammo HA Bijmolt. 2016. The effect of electronic word of mouth on sales: A meta-analytic review of platform, product, and metric factors. Journal of Marketing Research, 53(3):297-318.
Federico Battiston, Enrico Amico, Alain Barrat, Ginestra Bianconi, Guilherme Ferraz de Arruda, Benedetta
The physics of higher-order interactions in complex systems. Nature Physics, 17(10):1093-1098.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724-1734.
Kenneth Ward Church. 2017. Word2vec. Natural Language Engineering, 23(1):155-162.
Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine Learning, 20(3):273-297.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, pages 13063-13075.
Philippe Du Jardin. 2017. Dynamics of firm financial evolution and bankruptcy prediction. Expert Systems with Applications, 75:25-43.
Günes Erkan and Dragomir R Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479.
Jerome H Friedman. 2001. Greedy function approximation: a gradient boosting machine. Annals of Statistics, pages 1189-1232.
William L Hamilton. 2020. Graph representation learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 14(3):1-159.
James A Hanley and Barbara J McNeil. 1982. The meaning and use of the area under a receiver operating characteristic (ROC) curve.
Radiology, 143(1):29-36. +Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yong-dong Zhang, and Meng Wang. 2020. Lightgcn: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 639-648. + +Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In International Conference on Learning Representations(Poster), page 4. +Hyunjoon Kim and Zheng Gu. 2006. A logistic regression analysis for predicting bankruptcy in the hospitality industry. The Journal of Hospitality Financial Management, 14(1):17-34. +Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1746-1751. +Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, pages 1-14. +Grace Kong, Jennifer Unger, Lourdes Baezconde-Garbanati, and Steve Sussman. 2017. The associations between yelp online reviews and vape shops closing or remaining open one year later. Tobacco prevention & cessation, 2(Suppl). +Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, pages 2267-2273. +Junyi Li, Wayne Xin Zhao, Ji-Rong Wen, and Yang Song. 2019. Generating long and informative reviews with aspect-aware coarse-to-fine decoding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1969-1979. +Kangjie Li, Yixiong Feng, Yicong Gao, and Jian Qiu. 2020. Hierarchical graph attention networks for semi-supervised node classification. Applied Intelligence, 50(10):3441-3451. +Lei Li, Yongfeng Zhang, and Li Chen. 2021. 
Personalized transformer for explainable recommendation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4947-4957. +Piji Li, Zihao Wang, Zhaochun Ren, Lidong Bing, and Wai Lam. 2017. Neural rating regression with abstractive tips generation for recommendation. In Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval, pages 345-354. +Jianxun Lian, Fuzheng Zhang, Xing Xie, and Guangzhong Sun. 2017. Restaurant survival analysis with heterogeneous information. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 993-1002. + +Deron Liang, Chia-Chi Lu, Chih-Fong Tsai, and Guan-An Shih. 2016. Financial ratios and corporate governance indicators in bankruptcy prediction: A comprehensive study. European Journal of Operational Research, 252(2):561-572. +Hao Liao, Xiaojie Zhang, Xin Li, Mingyang Zhou, Alexandre Vidmer, and Rui Mao. 2020. A deep concept-aware model for predicting and explaining restaurant future status. In 2020 IEEE International Conference on Web Services, pages 559-567. +Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Workshop on Text Summarization Branches Out, Post-Conference Workshop of ACL 2004, pages 74-81. +Robert N Lussier. 1996. A startup business success versus failure prediction model for the retail industry. The Mid-Atlantic Journal of Business, 32(2):79. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318. +HG Parsa, John T Self, David Njite, and Tiffany King. 2005. Why restaurants fail. Cornell Hotel and Restaurant Administration Quarterly, 46(3):304-322. 
+Jose Manuel Pereira, Humberto Ribeiro, Amélia Silva, and Sandra Raquel Alves. 2020. To Fail or Not to Fail: An Algorithm for SME Survival Prediction Using Accounting Data. Springer International Publishing, Cham. +Martin Potthast, Johannes Kiesel, Kevin Reinartz, Janek Bevendorff, and Benno Stein. 2018. A stylometric inquiry into hyperpartisan and fake news. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 231-240. +Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63:1872-1897. +Steffen Rendle. 2010. Factorization machines. In 2010 IEEE International Conference on Data Mining, pages 995-1000. +Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. 2018. Multi-pointer co-attention networks for recommendation. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2309-2318. +Petar Velicković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph Attention Networks. International Conference on Learning Representations, pages 1-12. + +William Yang Wang. 2017. "liar, liar pants on fire": A new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 422-426. +Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua. 2019. Neural graph collaborative filtering. In Proceedings of the 42nd international ACM SIGIR conference on Research and development in Information Retrieval, pages 165-174. +Chuxu Zhang, Dongjin Song, Chao Huang, Ananthram Swami, and Nitesh V Chawla. 2019. Heterogeneous graph neural network. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 793-803. +John M Ziman. 1991. 
Reliable knowledge: An exploration of the grounds for belief in science. Cambridge University Press.

# A Appendix On Reproducibility

# A.1 Experimental Environment

The experiments run on GPU V100 and CentOS 7 servers. The code is implemented with TensorFlow.

# A.2 Reproducibility

# A.2.1 Code Resources

We compare the proposed framework, RSPE, with the 11 baseline methods discussed in Section 5.3: the prediction-task methods LR, SVM, GBDT, text-CNN, text-RNN, MPCN, HetGNN, HGAT and DCA, and the explanation-task methods LexRank, NRT and DCA. The code for our proposed framework, RSPE, is available at: https://github.com/Complex-data/RSPE. The other implementations were obtained as follows:

- LR, SVM, GBDT: we used scikit-learn, a publicly available machine learning library: https://scikit-learn.org/stable/index.html
- text-CNN: we used the publicly available implementation at: https://github.com/FinIoT/text_cnn
- text-RNN: we used the publicly available implementation at: https://github.com/luchi007/RNN_Text-Classify
- MPCN: we used the publicly available implementation at: https://github.com/vanzytay/KDD2018_MPCN
- HetGNN: we used the publicly available implementation at: https://github.com/chuxuzhang/KDD2019_HetGNN
- HGAT: we used the publicly available implementation at: https://github.com/BUPT-GAMMA/HGAT
- DCA: we used the publicly available implementation at: https://github.com/Complex-data/
- LexRank: we used the publicly available implementation at: https://github.com/crabcamp/lexrank
- NRT: we used the publicly available implementation at: https://github.com/lipiji/NRT-theano

# A.2.2 Data Processing

The datasets can be downloaded from DianPing and Yelp. Because the contents of the two datasets differ, we process each dataset separately.

Dianping: the download content includes two files.
One is checkins.json and the other is business.json. checkins.json contains all user-shop review records, while business.json contains all records about shops. The data processing steps are as follows: 1) Read the checkins and business data and merge them on restId. 2) Filter out non-restaurant data. 3) Filter by city; in our experiments we use the Beijing, Shanghai and Guangzhou data. 4) Filter out the 10% of users and restaurants with the fewest reviews. 5) Select the attributes required for the experiments: userid, restId, review, label.

Yelp: the download content also includes two files, review.json and business.json. review.json contains all user-shop review records, while business.json contains all records about shops. The data processing steps are as follows: 1) Read the review and business data and merge them on restId. 2) Filter by year; we only use data from 2019. 3) Since the Yelp data does not provide a detailed survival status, we identify restaurants by checking whether the RestaurantsReservations field exists; if it does, the business is a restaurant. 4) Filter by state; in our experiments we use Nevada, Arizona and Ontario. 5) Filter out the 10% of users and restaurants with the fewest reviews. 6) Select the attributes required for the experiments: userid, restId, review, label.

# A.2.3 Pre-trained Model

We encode words and sentences with the UniLM model. First, we download the UniLM model from https://github.com/microsoft/unilm. Then, we load the pre-trained UniLM model with TensorFlow and continue training it on our data. Finally, we obtain a semantic vector representation for each sentence through this pre-trained model.

# A.2.4 Hyperparameter Setting

For the hyperparameter settings of RSPE, the details of the major hyperparameters are shown in Table 6.
In our experiments, we set $\lambda_{1}$ (pred_lambda = 1) by default, and then tune the model by adjusting $\lambda_{2}$ (gen_lambda). The major hyperparameters are described as follows:

- gen_lambda: the weight of the generation loss.
- rnn_type: the name of the compositional model.

Table 6: The details of the parameters of RSPE
| Parameter | Set | Parameter | Set |
| --- | --- | --- | --- |
| rnn_type | RSPE | l2_reg | 1.00E-06 |
| opt | Adam | len_penalty | 2 |
| emb_size | 50 | implicit | 1 |
| rnn_size | 30 | att_pool | MAX |
| rnn_dim | 400 | dmax | 50 |
| use_lower | 1 | beam_size | 12 |
| dropout | 0.8 | init_type | xavier |
| gen_lambda (λ2) | 0.01 | beam_number | 4 |
| rnn_dropout | 0.8 | emb_dropout | 0.8 |
| lr | 0.001 | epochs | 50 |
| att_reuse | 0 | rnn_layers | 1 |
| pred_lambda (λ1) | 1 | | |
- emb_size: the dimension of the embeddings.
- rnn_size: the model-specific hidden dimension.
- epoch: the number of training epochs.
- lr: the learning rate.
- att_pool: the pooling type used for attention.
- dmax: the maximum number of reviews.
- beam_size: the beam search size.
- pred_lambda: the weight of the prediction task loss.

# A.2.5 Evaluation

- Results on the Prediction Task: we use AUC to evaluate the prediction task; execute test_RSPE.py to obtain the accuracy on the test set.
- Results on the Explanation Task: BLEU and ROUGE are used to evaluate the explanation task. For the BLEU metrics, we execute evaluate/compute_bleu.py to obtain the score. For the ROUGE metrics, we used the publicly available implementation at: https://github.com/kavgan/ROUGE-2.0. \ No newline at end of file diff --git a/ajointlearningframeworkforrestaurantsurvivalpredictionandexplanation/images.zip b/ajointlearningframeworkforrestaurantsurvivalpredictionandexplanation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7a55c3349b00bf1a3d638d834f4bf5b020078174 --- /dev/null +++ b/ajointlearningframeworkforrestaurantsurvivalpredictionandexplanation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e21bca23ca92a093e5bc672625caa82eddc5862c32225c466167606ed5bb3b08 +size 650106 diff --git a/ajointlearningframeworkforrestaurantsurvivalpredictionandexplanation/layout.json b/ajointlearningframeworkforrestaurantsurvivalpredictionandexplanation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..12bdccb05ecad55ad882af01ca2512a2c69bdcb9 --- /dev/null +++ b/ajointlearningframeworkforrestaurantsurvivalpredictionandexplanation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid
# ALFRED-L: Investigating the Role of Language for Action Learning in Interactive Visual Environments

Arjun R. Akula$^{1*}$, Spandana Gella$^{2}$, Aishwarya Padmakumar$^{2}$, Mahdi Namazifar$^{2}$, Mohit Bansal$^{2,3}$, Jesse Thomason$^{2,4}$, Dilek Hakkani-Tür$^{2}$

$^{1}$Google AI, $^{3}$University of North Carolina at Chapel Hill

$^{2}$Amazon Alexa AI, $^{4}$University of Southern California

arjunakula@google.com, sgella@amazon.com, padmakua@amazon.com

mahdinam@amazon.com, mbansal@cs.unc.edu, jessetho@usc.edu, hakkanit@amazon.com

# Abstract

Embodied Vision and Language Task Completion requires an embodied agent to interpret natural language instructions and egocentric visual observations to navigate through and interact with environments. In this work, we examine ALFRED (Shridhar et al., 2020), a challenging benchmark for embodied task completion, with the goal of gaining insight into how effectively models utilize language. We find evidence that sequence-to-sequence and transformer-based models trained on this benchmark are not sufficiently sensitive to changes in input language instructions.
Next, we construct a new test split, ALFRED-L, to test whether ALFRED models can generalize to task structures not seen during training that intuitively require the same types of language understanding required in ALFRED. Evaluation of existing models on ALFRED-L suggests that (a) models are overly reliant on the sequence in which objects are visited in typical ALFRED trajectories and fail to adapt to modifications of this sequence, and (b) models trained with additional augmented trajectories are able to adapt relatively better to such changes in input language instructions.

# 1 Introduction

Recently a number of benchmark datasets have been proposed to study the ability of embodied agents to understand natural language in the context of egocentric visual observations and predict sequences of executable actions to answer questions (Das et al., 2018), navigate to desired destinations (Anderson et al., 2018; Chen et al., 2019), or additionally manipulate objects to complete tasks (Shridhar et al., 2020; Padmakumar et al., 2021).

Although multi-modal transformer-based models have achieved tremendous progress on many of these datasets (Zhu et al., 2021b; Hong et al., 2021;

![](images/406d1bf52ab47604a2a371ffa1dd11df6d5499a5d32165715ab54b0af1f9c68d.jpg)

High-level Goal: Move a spoon to pan

Low-level Instructions:

<I1>: Walk to the coffee maker on the end of the counter.
<I2>: Grab the spoon from the counter.
<I3>: Go to the stove and focus on the top left burner.
<I4>: Place the spoon in the pan.
<I5>: Turn around and go to the kitchen sink.

NAVIGATION-ONLY: <I1>, <I3>, <I5>

REVERSE-ONE: <I1>-<I5>, followed by <I5-r>: Go back to the stove and focus on the top left burner.

REVERSE-n: <I1>-<I5>, followed by <I5-r>: Go back to the stove and focus on the top left burner. <I3-r>: Now walk again to the coffee maker on the end of the counter.

Figure 1: An example test trajectory for task type Pick and Place from ALFRED. We modify the original trajectory in three different ways to create the ALFRED-L test set (highlighted in red boxes): (a) the NAV-ONLY subset picks only the navigation instructions (e.g. I1, I3, I5) to form a new trajectory; (b) the REV-1 subset extends the original ALFRED trajectory by adding an additional reverse instruction to take the agent back by one navigation step (e.g. I5-r is the reverse step formed from I5); (c) the REV-n subset extends the original ALFRED trajectory by adding one or more reverse navigation steps.

Pashevich et al., 2021; Zhang and Chai, 2021), analysis of such models on other visual grounding tasks has suggested that they could be learning reasoning shortcuts and exploiting unintended biases without comprehending the underlying linguistic structure (Thomason et al., 2019; Zhu et al., 2021a; Chiang et al., 2021; Akula et al., 2020; Thrush et al., 2022; Akula et al., 2021).

In this work, we analyze models trained on the ALFRED dataset (Shridhar et al., 2020) to better understand how they utilize language for embodied task completion. We chose ALFRED for our analysis as it requires object manipulation (object pick and place, opening and closing doors, and more) in addition to navigation, making it more challenging than most related datasets.
In ALFRED, each example trajectory consists of a high-level natural language task description, followed by step-by-step (low-level) natural language instructions corresponding to logical subgoals that, when completed in sequence, accomplish the task (see Fig 1).

In this work, we refer to this sequence of logical subgoals as a task structure; ALFRED trajectories cover only 7 possible task structures. In each trajectory, an embodied agent is placed in the initial state of the environment, provided the language instructions, and expected to predict and execute a sequence of low-level actions that accomplish the task described by the instructions, using visual feedback from the execution of each action.

We evaluate the sensitivity of ALFRED models to changes in language instructions in two ways. We first examine whether model predictions are affected by the removal of words indicative of spatial relationships, or by the removal of step-by-step instructions entirely (§2). Our experiments demonstrate that model performance is less affected by these changes than expected. In contrast, the task completion rate of models drops to $0\%$ when deprived of visual inputs.

In addition, we construct a new test set, ALFRED-L, to test whether models can generalize to variations of ALFRED instructions that remove object manipulation steps or add navigation steps (§3). A sample from this dataset is shown in Fig 1. Intuitively, these changes do not require learning new language understanding capabilities, since following navigation instructions is already a prerequisite for successfully completing ALFRED tasks. Consequently, existing ALFRED models should be able to generalize well to such instructions. However, experimental results on ALFRED-L demonstrate that models are incapable of such generalization. We hypothesize that models overfit to the task structure of ALFRED and ignore the addition of extra objects to be visited in ALFRED-L.
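Concretely, the task structure of a trajectory like the one in Fig 1 is just the sequence of its subgoal types. A minimal sketch of this view of the data (field names here are simplified for illustration, not ALFRED's actual json schema):

```python
# Illustrative trajectory record: the "task structure" is the sequence of
# subgoal types; field names are simplified, not ALFRED's json schema.
trajectory = {
    "goal": "Move a spoon to pan",
    "subgoals": [
        ("GoToLocation", "Walk to the coffee maker on the end of the counter."),
        ("PickupObject", "Grab the spoon from the counter."),
        ("GoToLocation", "Go to the stove and focus on the top left burner."),
        ("PutObject",    "Place the spoon in the pan."),
    ],
}

task_structure = [subgoal_type for subgoal_type, _ in trajectory["subgoals"]]
print(task_structure)
# -> ['GoToLocation', 'PickupObject', 'GoToLocation', 'PutObject']
```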
Further, we find that models trained using a larger number of visual scenes are able to adapt relatively better to such changes in input language instructions, suggesting the importance of data augmentation techniques for making substantive progress on ALFRED.

# 2 Analysis by Reducing Instruction Informativeness

In this section, we examine the sensitivity of models to loss of information from language instructions when trying to complete ALFRED tasks. In our first experiment, E1, we drop all words and phrases that indicate directional and spatial information from high- and low-level language instructions during inference.

We note that $81\%$ of the tokens in ALFRED
| Model | Val-U % | Val-S % | E1-U (Δ)% | E1-S (Δ)% | E2-U (Δ)% | E2-S (Δ)% |
| --- | --- | --- | --- | --- | --- | --- |
| ET | 2.7 | 31.5 | (40.5) | (35.3) | (5.6) | (1.1) |
| ET w/o PT | 2.1 | 26.2 | (42.9) | (31.6) | (6.2) | (1.3) |
| ET+Synth | 6.5 | 44.7 | (36.9) | (21.2) | (3.8) | (0.9) |
| HiTuT | 12.4 | 25.2 | (30.6) | (29.8) | (6.7) | (4.3) |
| MOCA | 3.7 | 19.1 | (21.5) | (20.0) | (0.0) | (2.1) |
| Seq2Seq | 0.0 | 3.7 | (0.0) | (0.0) | (0.0) | (1.5) |
Table 1: Task Success Rate (in percentage) of models on the ALFRED validation unseen (Val-U) and seen (Val-S) splits. In perturbation E1, all directional and spatial words are dropped from instructions. In perturbation E2, we drop all the language instructions and keep only the higher-level task description. The relative percentage drop in success rate before and after the perturbations is shown in parentheses in the last four columns. All numbers reported here are averages across five experiments with different seeds.

instructions constitute directional and spatial information such as "to the left", "three steps forward", "towards right", and "over to the back". We hypothesize that the absence of such crucial information should cause models to be unable to correctly navigate or identify the objects being referred to.

In our second experiment, E2, we discard all the step-by-step language instructions from the input$^{1}$ (see Appendix A for more details). We analyze the following models trained on ALFRED: (a) Episodic Transformer (ET), a model that uses a transformer to encode multimodal inputs (Pashevich et al., 2021); ET+Synth, a version of ET augmented with synthetic trajectories; and ET w/o PT, an ablated version that does not include language pretraining; (b) HiTuT, a hierarchical transformer-based model that explicitly predicts sub-goals in addition to low-level actions at every time step, enabling backtracking to cope with execution failures (Zhang and Chai, 2021); (c) MOCA, a modular sequence-to-sequence model that separates action and object prediction (Singh et al., 2020); (d) Seq2Seq, a simple sequence-to-sequence baseline (Shridhar et al., 2020).

Table 1 shows the overall task success rates of models on the validation seen and unseen splits before and after the perturbations$^{2}$. Table 2 shows the sub-goal success rates of the ET+Synth model$^{3}$.
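The E1 perturbation amounts to token filtering. As an illustration only (the paper does not list the authors' full lexicon of directional and spatial words), it can be sketched as:

```python
# Sketch of the E1 perturbation: drop directional/spatial tokens from an
# instruction. SPATIAL is an illustrative subset, not the paper's full list.
SPATIAL = {"left", "right", "forward", "backward", "towards", "back",
           "behind", "front", "top", "bottom", "over", "under", "around"}

def perturb_e1(instruction):
    kept = [tok for tok in instruction.split()
            if tok.lower().strip(".,") not in SPATIAL]
    return " ".join(kept)

print(perturb_e1("Turn right and walk towards the counter on the left."))
# -> 'Turn and walk the counter on the'
```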
Sub-goal success rate measures the ability of a model to accomplish the next sub-goal conditioned on the preceding ground-truth expert sequence. In perturbation E1, we find up to a 40% relative drop in overall task success rates. Surprisingly, we see only
| Sub-Goal | Val-U % | Val-S % | E1-U (Δ)% | E1-S (Δ)% | E2-U (Δ)% | E2-S (Δ)% |
| --- | --- | --- | --- | --- | --- | --- |
| CleanObject | 91.2 | 88.4 | (18.5) | (16.1) | (2.5) | (1.3) |
| CoolObject | 99.1 | 95.2 | (-0.9) | (0.0) | (0.0) | (0.0) |
| GoToLoc. | 50.7 | 80.0 | (-4.3) | (1.7) | (-0.5) | (0.2) |
| HeatObject | 99.3 | 94.4 | (0.8) | (6.2) | (0.0) | (1.1) |
| PickupObject | 69.0 | 75.9 | (4.4) | (0.8) | (2.1) | (0.0) |
| PutObject | 69.8 | 84.5 | (6.5) | (0.9) | (1.0) | (0.0) |
| SliceObject | 65.8 | 89.7 | (1.3) | (5.4) | (0.0) | (1.8) |
| ToggleObject | 83.2 | 98.9 | (0.0) | (-1.1) | (0.0) | (0.0) |
Table 2: Sub-goal Success Rate of the ET+Synth model on ALFRED. The relative percentage drop in success rate before and after the perturbations is shown in parentheses in the last four columns. Negative percentages denote that performance on a sub-goal improved after the perturbation.

![](images/f559520636a4c8386269cedeae057c0e7634d285da7b28243c6c3c5a46c4849b.jpg)
Figure 2: We examine the last subgoal of a trajectory that an agent reaches before failing, before and after perturbation. Most failures are in navigation (GoToLocation) and picking up objects (PickUp), and these increase upon perturbation.

a $<7\%$ relative drop on all the sub-goals (except for CleanObject), casting doubt on whether models are effectively utilizing language instructions. We explain these two contrasting observations by examining the most common sub-goal failures and low-level API action failures in overall task completion. In Fig 2, we observe that the highest failure rate is for the GoToLocation sub-goal. Examining the last incorrect action prediction that caused the failure within each subgoal, we find that most failures are caused by attempting to perform PickUp and Put actions within a GoToLocation sub-goal, where an agent is only expected to navigate (shown in Fig 3). We also observe that dropping directional and spatial words from instructions further increases this model bias to perform PickUp and Put actions even when there is no object in view to be manipulated, leading to more failures in completing the overall task. On the other hand, with perturbation E2, we do not see any significant drop in model performance in either task or sub-goal success rates (Table 1; columns E2-U (Δ)% and E2-S (Δ)%).
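The (Δ)% values in Tables 1 and 2 are relative drops computed from the absolute success rates (reported in Tables 5 and 6 of the appendix). A minimal sketch, using ET's Val-S rate under E1 as an example:

```python
# Relative percentage drop between an unperturbed and a perturbed success
# rate, as reported in parentheses in Tables 1 and 2.
def relative_drop(before, after):
    return (before - after) / before * 100.0

# ET, Val-S, perturbation E1: 31.5% -> 20.3% (Table 5).
print(round(relative_drop(31.5, 20.3), 1))  # -> 35.6
# Table 1 reports 35.3, presumably computed from unrounded rates.
```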
This indicates that these models fail to make effective use of the language instructions and instead exploit

![](images/dba2c6747d406a50392ff4ff18aecf8124d4fdf7741ac2ea736aea9f91100535.jpg)
Figure 3: Most predicted trajectories fail in ALFRED due to predicting low-level actions when these are infeasible. For each subgoal, we examine the percentage of trajectories that failed by last predicting particular actions. We observe that most failures are from attempting to pick up an object when the agent needs to navigate.

shallower visual correlations$^{4}$.

We additionally perform an experiment where we drop the visual input instead of the language instructions. The accuracy drops to $0\%$ in this case (on both val seen and unseen splits), indicating the higher influence of visual input on model performance.

# 3 ALFRED-L for Testing Generalization

ALFRED step-by-step language instructions involve a combination of navigation subgoals and a variety of object manipulation subgoals. Intuitively, a model that can understand such language instructions should still be able to follow modified combinations of them that add or remove some steps. To evaluate the ability of ALFRED models to generalize in this manner, we create modified instructions using examples from the ALFRED validation splits. We call this modified test split ALFRED-L.

More concretely, training samples in ALFRED are typically a sequence of alternating navigation (GoToLocation) and object manipulation (for example CleanObject, HeatObject) subgoals, always ending with an object manipulation subgoal. Models would thus expect to perform object manipulations at the end of each navigation subgoal and could memorize locations of objects commonly

Turn right, move to the fireplace, turn left, move to the white shelf. Pick up the open box on the bottom shelf. Turn around, move across the couch to the right and to the lamp in the corner. Turn on the tall lamp in the corner. Move back to the white shelf.
![](images/96a9ea1ecb45d5d4ba568c118a2b77f43916467bbc113759da0109d13b0f6703.jpg)

![](images/1fd9930a2ee7c43d04c64eb78eac0c3f5829c9662f0e3c47f14d17a330bd2d76.jpg)

![](images/ca29fa167b29ca59dcfc92cb2e2ef31d81d65bb5424e2436554ef921d269d59f.jpg)

![](images/333ce9b8dca420cf2c9ff6de7212813a65c4165318a6720a00f13f1cd9a34083.jpg)

![](images/17ac747e4e97c12550017e5648b99978076eb18906c5893e5b42cd847bc731f4.jpg)

![](images/bd0fbb4660ee16660bd6d8d8d90e0a9df4ce27427bfa1913088fb298cedc9e6a.jpg)

![](images/553eae4b4fedef8564cd4df72586dcfcd2d0b666af01aada679402eb8c011948.jpg)

![](images/202e6bc2dc9698d1556b1d79118a666fe2c940af0a8dd6f7d0e374ddd2be1487.jpg)

![](images/67ee38917f7333e2ee71a62d4efbf015ae74729215151002f89cc8f7701cc22c.jpg)

![](images/c6725cf98ea79238d28860f684e14dd587b382d9aaff168d3f9460be55ebc08b.jpg)

![](images/bbd3e9cc62201387b625965f410082c139bb610dd74c281ae917f5fd8328f2ca.jpg)

![](images/ba6af21eff78a7fdbe5c6eee138241096bcbc05d62112b944e90cc5cd3d9ab4e.jpg)

![](images/8114bfe6cf438b1196bc2ae8b8a840853ae9f7bc73b51094be167a6ddfbe33f8.jpg)

![](images/fd2d6772f450e75e0b45bf5c87de6906501ddc509cc28fefdd6f11d559c5d106.jpg)

![](images/cc765d44cdf5b8142144b0c7067ec62ad1f661cb5a2b32a45e2919996eb79fa1.jpg)

![](images/26edc8874180b18edf9b4015430ef0f8af1aa38d9ad9b28b6cda3a0439b20752.jpg)
Figure 4: An example from ALFRED-L highlighting the re-generated visual frames (last row, $t = 101$ to 124) corresponding to the REV-1 language annotation (text at the top, highlighted in red).
![](images/06e47f7b041f131ff7ea5d30e0a1ebadb8dbad81f3fbe98e1b2f5ac06a13fc82.jpg)

![](images/35343e1a4060cb27abc92e82a3a31acba8cc5d84324d3039d568ad8ab8eeb080.jpg)

![](images/ac4e05622357380725dbc282aa6b8c6514be8bfc7b49eb9a887da2625a0b9ac1.jpg)

![](images/6179e39ec271d19655e21c331928fbb6fb06872c83b26700ff6b73ad9640513c.jpg)

navigated to, and exploit these instead of understanding the instructions provided at inference time to determine which objects to navigate to. We create ALFRED-L to break some of these patterns by removing the need for object manipulation actions and adding instructions to navigate to additional objects. A model that sees a significant performance drop between ALFRED and ALFRED-L is likely overly reliant on the task structure of ALFRED and ignoring details present in the language instructions.

As shown in Fig 1 and Table 3, ALFRED-L consists of 3 subsets:

(a) NAV-ONLY (Navigation-only): This is constructed by removing instructions for all object manipulation steps. An agent that understands the change made to the instructions would navigate along the same trajectory as before but without interacting with any objects.

(b) REV-1 (Reverse-1) and REV-n (Reverse-n): These add additional navigation instructions instructing the agent to backtrack to known reference positions along the trajectory. REV-1 adds one backtracking navigation step to the original ALFRED trajectory and REV-n adds more than one. These evaluate whether an agent is capable of remembering a point it has navigated to during execution so far, and navigating back to it without the expectation of performing further object manipulations$^{5}$. In other words, using REV-1 and REV-n, our goal is to detect whether the embodied agent overfits to the seen task structures, since the existing unseen test splits of ALFRED only evaluate generalization to unseen environments and fail to test generalization to unseen task structures.
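The NAV-ONLY construction in (a) amounts to keeping only the navigation subgoals of a trajectory. A minimal sketch (subgoal labels follow ALFRED's naming; the data layout is illustrative):

```python
# Keep only navigation subgoals; manipulation steps (PickupObject,
# PutObject, ...) are dropped, as in the NAV-ONLY subset.
def nav_only(subgoals):
    return [(t, instr) for t, instr in subgoals if t == "GoToLocation"]

traj = [("GoToLocation", "Walk to the coffee maker on the end of the counter."),
        ("PickupObject", "Grab the spoon from the counter."),
        ("GoToLocation", "Go to the stove and focus on the top left burner."),
        ("PutObject",    "Place the spoon in the pan."),
        ("GoToLocation", "Turn around and go to the kitchen sink.")]
print(len(nav_only(traj)))  # -> 3
```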
We expect models to have this capability because, in collecting the language instructions for the REV-1 and REV-n splits, we only used words that appear in the original ALFRED training data$^{6}$. Figure 4 shows an example of the re-generated visual frames for a REV-1 instruction.

# 3.1 Evaluation on ALFRED-L

Table 4 shows the experimental results on ALFRED-L. Interestingly, the performance of all

![](images/01b59cb817c1109289baa4bf59a57081b4787b36cb5b7bb7abd11912ae776f71.jpg)
Figure 5: Performance on ALFRED-L seen splits with different proportions of masked directional and spatial words.

![](images/df62055769076e98c1bc613b5490981a63787d1e1ab2b45d8778046032ad45b0.jpg)

![](images/8ffe3842df2e1d2dad9c0fad1569a260bac8f242ee3bc606f447297367a5669e.jpg)
| | ALFRED | ALFRED-L | NAV | REV-1 | REV-n |
| --- | --- | --- | --- | --- | --- |
| Trajectories | 506 | 1024 | 506 | 375 | 143 |
| Anns. | 1641 | 3326 | 1641 | 1219 | 466 |
| Sub-goals | 10710 | 18919 | 5111 | 9183 | 4625 |
tested models increases by more than $30\%$ (relative) on NAV-ONLY, whereas performance drops by up to $91\%$ (relative) on the REV-1 and REV-n splits. Clearly the models fail to perform reverse navigation steps; the non-zero success rate on REV-1 and REV-n results from test samples where the model's destination in the original test set and the target of the reverse navigation step are within the reachability threshold$^{7}$. We hypothesize that NAV-ONLY performance is higher than that on the ALFRED test set because the agent has to perform a similar trajectory, allowing the use of previously memorized knowledge about object positions, but does not have to frame or segment objects correctly, as these do not need to be manipulated. These observations strengthen our conclusions from Section 2 that models tend to rely heavily on the ALFRED task structure and visual input while ignoring details present in language instructions.

We also test the models trained in Section 2 on ALFRED-L. These models are trained with language instructions where different proportions of directional and spatial words are masked out. From the results of these experiments, presented in Fig 5, we observe that the models do not show any sensitivity to these perturbations. Overall, ET+Synth is relatively more sensitive and generalizes better to ALFRED-L compared to the other models, indicating that augmenting ALFRED training data with additional trajectories helps models better utilize language.

Table 3: Statistics of the ALFRED (val seen + unseen) and ALFRED-L (seen + unseen) test splits.
| Model | ALFRED (S) | ALFRED (U) | NAV-ONLY (S) | NAV-ONLY (U) | REV-1 (S) | REV-1 (U) | REV-n (S) | REV-n (U) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ET | 31.5 | 2.7 | 39.9 | 11.2 | 2.6 | 0.5 | 2.1 | 0.0 |
| ET w/o PT | 26.2 | 2.1 | 38.1 | 8.7 | 2.1 | 0.5 | 1.8 | 0.0 |
| ET+Synth | 44.7 | 6.5 | 51.3 | 19.6 | 9.2 | 2.3 | 7.8 | 1.9 |
| HiTuT | 25.2 | 12.4 | 35.6 | 20.2 | 2.4 | 2.0 | 1.8 | 0.6 |
| MOCA | 19.1 | 3.7 | 20.6 | 4.5 | 0.5 | 0.0 | 0.0 | 0.0 |
| Seq2Seq | 3.7 | 0.0 | 3.9 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
Table 4: Comparison of model performance (Task Success Rate in percentage) on the Validation Seen (S) and Unseen (U) splits of ALFRED and ALFRED-L.

# 4 Conclusion

We evaluate embodied task completion models trained on ALFRED and find that they are not very sensitive to the loss of spatial and directional information, or of detailed task steps. We also present a new test split, ALFRED-L, to test generalization to novel task structures and find that models are unable to adapt to the addition of extra reverse navigation steps.

We hope that our work guides the development of future embodied AI benchmarks (and models) to avoid the issues we identified with ALFRED. In addition, our analysis at the sub-goal level on unseen environments (unseen test) and on our proposed ALFRED-L test split helps test different generalization aspects and can therefore potentially surface major failure modes in other embodied AI datasets.

# 5 Limitations

We analyze ALFRED models for their sensitivity to modifications in input language instructions. However, since our analysis is restricted to a dataset in English, we are unsure whether similar behavior would be observed in languages other than English for embodied task completion. We hypothesize that such behavior will be less likely in datasets spanning more tasks or where less of the scene is visible from any given position of the agent.

Additionally, we do not include models that build semantic maps of the environment in this analysis. However, some such works (Blukis et al., 2022; Min et al., 2021) have stated that the difference in performance when step-by-step instructions are removed is low.

Another limitation of our work is that, unlike in the creation of ALFRED, where each trajectory is annotated with 3 (or more) sets of language instructions, in ALFRED-L we only provide a single language instruction for the additional reverse navigation steps added in the REV-1 and REV-n splits.
Since the examples in REV-1 and REV-n include all the steps of the original ALFRED trajectory, we rely on the diversity between the original sets of ALFRED instructions to separate the resulting instructions in ALFRED-L.

# References

Arjun Akula, Spandana Gella, Yaser Al-Onaizan, Song-chun Zhu, and Siva Reddy. 2020. Words aren't enough, their order matters: On the robustness of grounding visual referring expressions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6555-6565.

Arjun Akula, Spandana Gella, Keze Wang, Song-chun Zhu, and Siva Reddy. 2021. Mind the context: the impact of contextualization in neural module networks for grounding visual referring expressions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6398-6416.

Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton Van Den Hengel. 2018. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3674-3683.

Valts Blukis, Chris Paxton, Dieter Fox, Animesh Garg, and Yoav Artzi. 2022. A persistent spatial semantic representation for high-level natural language instruction execution. In Conference on Robot Learning, pages 706-717. PMLR.

Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. 2019. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12538-12547.

Ting-Rui Chiang, Yi-Ting Yeh, Ta-Chung Chi, and Yau-Shian Wang. 2021. Are You Doing What I Say? On Modalities Alignment in ALFRED. In Novel Ideas

in Learning-to-Learn through Interaction Workshop at EMNLP 2021.

Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra.
2018. Embodied question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1-10.

Yicong Hong, Qi Wu, Yuankai Qi, Cristian Rodriguez-Opazo, and Stephen Gould. 2021. VLN BERT: A recurrent vision-and-language BERT for navigation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1643-1653.

So Yeon Min, Devendra Singh Chaplot, Pradeep Ravikumar, Yonatan Bisk, and Ruslan Salakhutdinov. 2021. FILM: Following instructions in language with modular methods. arXiv preprint arXiv:2110.07342.

Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, and Dilek Hakkani-Tur. 2021. TEACh: Task-driven Embodied Agents that Chat. arXiv preprint arXiv:2110.00534.

Alexander Pashevich, Cordelia Schmid, and Chen Sun. 2021. Episodic transformer for vision-and-language navigation. arXiv preprint arXiv:2105.06453.

Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. 2020. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10737-10746.

Kunal Pratap Singh, Suvaansh Bhambri, Byeonghwi Kim, Roozbeh Mottaghi, and Jonghyun Choi. 2020. MOCA: A modular object-centric approach for interactive instruction following. arXiv preprint arXiv:2012.03208.

Jesse Thomason, Daniel Gordon, and Yonatan Bisk. 2019. Shifting the baseline: Single modality performance on visual navigation & QA. In NAACL.

Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Candace Ross. 2022. Winoground: Probing vision and language models for visio-linguistic compositionality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5238-5248.

Yichi Zhang and Joyce Yue Chai. 2021.
Hierarchical task learning from language instructions with unified transformers and self-monitoring. ArXiv, abs/2106.03427.

Wanrong Zhu, Yuankai Qi, P. Narayana, Kazoo Sone, Sugato Basu, Xin Eric Wang, Qi Wu, Miguel P. Eckstein, and William Yang Wang. 2021a. Diagnosing vision-and-language navigation: What really matters. ArXiv, abs/2103.16561.

Wanrong Zhu, Xin Wang, Tsu-Jui Fu, An Yan, P. Narayana, Kazoo Sone, Sugato Basu, and William Yang Wang. 2021b. Multimodal text style transfer for outdoor vision-and-language navigation. In EACL.

# Appendix

In this supplementary material, we begin by providing more details on our perturbation experiments to supplement Section 2 of the main paper. We then present additional details on our ALFRED-L annotation, and show a few examples randomly sampled from ALFRED-L to supplement Section 3.

# A Reducing Instruction Informativeness

As discussed in Section 2 of the main paper, in perturbation experiment E1 we drop all directional and spatial words from both high- and low-level language instructions. Note that, in E1, we only perturb samples during inference and use the original unperturbed data during training. In Figure 6 we present examples of these perturbations. In experiment E2, on the other hand, we discard all low-level language instructions during both training and inference, and do not perform any perturbation on the high-level language instructions. We closely followed the original setup used by ET and the other models proposed for the ALFRED dataset (batch size, learning rate, pre-training, iterations, etc.) for training and inference. All these models are trained using 4 to 8 NVIDIA A100 and V100 instances. Table 5 and Table 6 present the absolute task and sub-goal success rates of the models in the E1 and E2 settings, corresponding to Table 1 and Table 2 in Section 2 of the main paper.
# B ALFRED-L Test Splits

As discussed in Section 3 of the main paper, we construct the ALFRED-L test split by modifying trajectories from the ALFRED validation Seen and Unseen splits. In the subsections below we provide details on the creation of the ALFRED-L splits and the modifications made to the trajectories.

# B.1 NAV-ONLY subset

This is constructed by removing language instructions for all object manipulation steps. After deleting all the interaction sub-goals, we re-generate the visual scenes using the render_trajs.py script from https://github.com/alexpashevich/E.

![](images/8cb2038771e640a94f495d1b98c623f5ba0513107ec87f09aae308aebf2987dc.jpg)
Figure 6: Examples for experiment E1 perturbations
| Model | Val-U % | Val-S % | E1-U % | E1-S % | E2-U % | E2-S % |
| --- | --- | --- | --- | --- | --- | --- |
| ET | 2.7 | 31.5 | 1.6 | 20.3 | 2.5 | 31.1 |
| ET w/o PT | 2.1 | 26.2 | 1.2 | 17.9 | 2.0 | 25.8 |
| ET+Synth | 6.5 | 44.7 | 4.1 | 35.2 | 6.2 | 44.1 |
| HiTuT | 12.4 | 25.2 | 8.6 | 17.6 | 11.5 | 24.1 |
| MOCA | 3.7 | 19.1 | 2.9 | 15.2 | 3.7 | 18.7 |
| Seq2Seq | 0.0 | 3.7 | 0.0 | 3.7 | 0.0 | 3.6 |
+ +Table 5: Task Success Rate (in percentage) of models on ALFRED validation unseen (Val-U) and seen (Val-S) splits in E1 and E2. + +T./tree/master/alfred/gen - to make the visual inputs to be consistent with the language inputs. + +For example, consider a sample json structure for the trajectories in ALFRED as shown in Figure 7. To create NAV-ONLY trajectory from this json, + +(a) We first collect the high-level indices (highidx) of all the GoToLocation sub-goals using the json object plan -> high_pddb1. +(b) We next filter out the navigation low-level actions indices (i.e. lowidx in low_actions json object) for the corresponding navigation high-level indices filtered in previous step. +(c) Next, we remove all the images from the images json object which does not contain the selected high_idx and low_idx values. +(d) We then remove all the language annotations for the manipulation actions in the turk_annotations -> annots -> high_descs based on the selected high_idx. Note that high-level indices (high_idx) has one-to-one mapping with the language annotations. +(e) We pass this updated json to the render_traj.py script to re-generate the images. + +Table 7 and Table 8 show few examples of the language annotations in the original trajectory and in the modified NAV-ONLY trajectory. Note that we explicitly capture the final expected position of the agent and modify the original ALFRED evalua + +
| Sub-Goal | Val-U % | Val-S % | E1-U % | E1-S % | E2-U % | E2-S % |
| --- | --- | --- | --- | --- | --- | --- |
| CleanObject | 91.2 | 88.4 | 74.3 | 74.1 | 88.9 | 87.2 |
| CoolObject | 99.1 | 95.2 | 99.9 | 95.2 | 99.1 | 95.2 |
| GoToLocation | 50.7 | 80.0 | 52.8 | 78.6 | 50.9 | 79.8 |
| HeatObject | 99.3 | 94.4 | 98.5 | 88.5 | 99.3 | 93.3 |
| PickupObject | 69.0 | 75.9 | 65.9 | 75.2 | 67.5 | 75.9 |
| PutObject | 69.8 | 84.5 | 65.2 | 83.7 | 69.1 | 84.5 |
| SliceObject | 65.8 | 89.7 | 64.9 | 84.8 | 65.8 | 88.0 |
| ToggleObject | 83.2 | 98.9 | 83.2 | 99.9 | 83.2 | 98.9 |
Table 6: Sub-goal Success Rate of the ET+Synth model on ALFRED validation unseen (Val-U) and seen (Val-S) splits in E1 and E2.

tion pipeline to only consider this final position for computing the task success rate. While evaluating model performance on the NAV-ONLY split, if the agent is within 5 steps (i.e., reachability threshold $\leq 1.25$, where the step size in AI2Thor is 0.25) of the ground-truth destination, we consider the task to be successful.

# B.2 REV-1 and REV-n splits

These splits add additional navigation instructions instructing the agent to backtrack to known reference positions along the trajectory. REV-1 adds exactly one navigation step to the original set of step-by-step instructions from ALFRED, and REV-n adds more than one navigation step. The authors of this work annotate the language instructions for these reverse steps. We perform multiple validation passes and delete the trajectories that are ambiguous or unclear. Table 7 and Table 8 show a few examples of the language annotations in the original trajectory and in the modified REV-1 and REV-n trajectories. To re-generate the visual scenes for the newly added reverse instructions, we first add a new sequence of low-level actions to the json structure by reversing the order of the original navigation steps, and then pass the updated json structure to the render_traj.py script from https://github.com/alexpashevich/E.T./tree/master/alfred/gen to generate the corresponding images.

For example, if the original sequence of navigation contains low-level actions such as MoveForward $\rightarrow$ MoveForward $\rightarrow$ PickUp $\rightarrow$ RotateLeft $\rightarrow$ MoveForward $\rightarrow$ LookDown, the reversed navigation actions would be LookUp $\rightarrow$ MoveForward $\rightarrow$ RotateRight $\rightarrow$ MoveForward $\rightarrow$ MoveForward. As we can see, we interchange the RotateRight and RotateLeft actions, and the LookUp and LookDown actions, while backtracking.
![](images/0be0d2c1cca5426e19049af9f5db62c73a05088d48d224988ea9c9b1af5f013b.jpg)
Figure 7: JSON structure for the trajectories in ALFRED

Also, the first time we initiate the reverse backtracking, we perform the RotateRight action twice. Moreover, all the object interaction actions such as PickUp are skipped while backtracking.

Similar to the NAV-ONLY trajectories, we explicitly capture the final expected position of the agent and modify the original ALFRED evaluation pipeline to consider the final position of the agent, in addition to the object interaction tasks, for computing the task success rate. We set the reachability threshold to $\leq 1.25$, where the step size in AI2Thor is 0.25. In Table 4 of the main paper, we find the performance of all the models on REV-1 and REV-n drops to 0 on both seen and unseen splits when we decrease the reachability threshold to $\leq 0.5$.
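The reversal rule described above can be sketched as follows; this is a hypothetical helper written for illustration, not the authors' generation script, and the action names simply follow the low-level actions listed above:

```python
# Sketch of the reverse-step construction described above (hypothetical
# helper, not the authors' script): the action sequence is reversed,
# rotations and look directions are interchanged, and object interaction
# actions such as PickUp are skipped while backtracking.

SWAP = {"RotateLeft": "RotateRight", "RotateRight": "RotateLeft",
        "LookUp": "LookDown", "LookDown": "LookUp"}
NAVIGATION = {"MoveForward", "RotateLeft", "RotateRight", "LookUp", "LookDown"}

def reverse_navigation(actions, first_backtrack=False):
    reversed_actions = []
    if first_backtrack:
        # Turn around (two RotateRight actions) before the first backtrack.
        reversed_actions += ["RotateRight", "RotateRight"]
    for action in reversed(actions):
        if action not in NAVIGATION:
            continue  # skip interactions such as PickUp
        reversed_actions.append(SWAP.get(action, action))
    return reversed_actions

original = ["MoveForward", "MoveForward", "PickUp",
            "RotateLeft", "MoveForward", "LookDown"]
print(reverse_navigation(original))
# ['LookUp', 'MoveForward', 'RotateRight', 'MoveForward', 'MoveForward']
```

Running it on the example sequence from the text reproduces the reversed sequence given above.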
Task Type: pick clean then place in recep
High-level Goal: Put a knife in the sink before standing it on the counter.

Original Instructions:
- Go to the right and walk to the fridge, hang a right and go to the counter between the dishwasher and stove.
- Pick up the potato that is on the counter.
- Go right to the microwave.
- Put the potato in the microwave, turn it on to cook, remove the potato.
- Go left towards toward the fridge, then hang a left, go to the garbage can.
- Put the potato in the garbage can.

NAV-ONLY Instructions:
- Go to the right and walk to the fridge, hang a right and go to the counter between the dishwasher and stove.
- Go right to the microwave.
- Go left towards toward the fridge, then hang a left, go to the garbage can.

REV-1 Instructions:
- Go to the right and walk to the fridge, hang a right and go to the counter between the dishwasher and stove.
- Pick up the potato that is on the counter.
- Go right to the microwave.
- Put the potato in the microwave, turn it on to cook, remove the potato.
- Go left towards toward the fridge, then hang a left, go to the garbage can.
- Put the potato in the garbage can.
- Walk back to the microwave.

REV-n Instructions:
- Go to the right and walk to the fridge, hang a right and go to the counter between the dishwasher and stove.
- Pick up the potato that is on the counter.
- Go right to the microwave.
- Put the potato in the microwave, turn it on to cook, remove the potato.
- Go left towards toward the fridge, then hang a left, go to the garbage can.
- Put the potato in the garbage can.
- Walk back to the microwave.
- Return to the counter between the dishwasher and stove.
Table 7: Random example from ALFRED-L. We show the original crowd-sourced instructions from ALFRED as well as our modified ALFRED-L instructions in the NAV-ONLY, REV-1, and REV-n settings.
Task Type: pick clean then place in recep
High-level Goal: Clean a knife and put it back onto the counter.

Original Instructions:
- Turn left and move to the gray coffee maker to the right of the lettuce, then move to the silver dishwasher to the right of the black toaster.
- Pick up the yellow handled knife to the left of the square plate from the counter.
- Turn around and move to the sink to the right of the loaf of bread.
- Place the knife in the sink to the left of the lettuce, turn on the faucet to rinse the knife, then pick up the knife from the sink.
- Turn around and face the dishwasher underneath the green glass.
- Place the knife on the plate to the rear of the potato on the counter.

NAV-ONLY Instructions:
- Turn left and move to the gray coffee maker to the right of the lettuce, then move to the silver dishwasher to the right of the black toaster.
- Turn around and move to the sink to the right of the loaf of bread.
- Turn around and face the dishwasher underneath the green glass.

REV-1 Instructions:
- Turn left and move to the gray coffee maker to the right of the lettuce, then move to the silver dishwasher to the right of the black toaster.
- Pick up the yellow handled knife to the left of the square plate from the counter.
- Turn around and move to the sink to the right of the loaf of bread.
- Place the knife in the sink to the left of the lettuce, turn on the faucet to rinse the knife, then pick up the knife from the sink.
- Turn around and face the dishwasher underneath the green glass.
- Place the knife on the plate to the rear of the potato on the counter.
- Walk back to the sink to the right of the loaf of bread.

REV-n Instructions:
- Turn left and move to the gray coffee maker to the right of the lettuce, then move to the silver dishwasher to the right of the black toaster.
- Pick up the yellow handled knife to the left of the square plate from the counter.
- Turn around and move to the sink to the right of the loaf of bread.
- Place the knife in the sink to the left of the lettuce, turn on the faucet to rinse the knife, then pick up the knife from the sink.
- Turn around and face the dishwasher underneath the green glass.
- Place the knife on the plate to the rear of the potato on the counter.
- Walk back to the sink to the right of the loaf of bread.
- Move to the silver dishwasher to the right of the black toaster.
+ +Table 8: Random example from ALFRED-L. We show original crowd-sourced instructions from ALFRED as well as our modified ALFRED-L instructions in NAV-ONLY, REV-1 and REV-n setting. \ No newline at end of file diff --git a/alfredlinvestigatingtheroleoflanguageforactionlearningininteractivevisualenvironments/images.zip b/alfredlinvestigatingtheroleoflanguageforactionlearningininteractivevisualenvironments/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b0b91fc070a85cf8b0a7ad3b1e6ee0daf2cbc2b0 --- /dev/null +++ b/alfredlinvestigatingtheroleoflanguageforactionlearningininteractivevisualenvironments/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:daccefe0d6f50a62b85b5f5b49fd7634b066c35f24f56cc2af0979448621b843 +size 817552 diff --git a/alfredlinvestigatingtheroleoflanguageforactionlearningininteractivevisualenvironments/layout.json b/alfredlinvestigatingtheroleoflanguageforactionlearningininteractivevisualenvironments/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..417bcdcd6ce1cdf6f97bb219e3799c4dec17281e --- /dev/null +++ b/alfredlinvestigatingtheroleoflanguageforactionlearningininteractivevisualenvironments/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd2d2d17f6a65714ee2ef5bde91ac6993c1d5292cfb603da8f58eca1e73cfaf0 +size 317478 diff --git a/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/3514e8c4-1efa-4537-9225-12d49c8895c9_content_list.json b/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/3514e8c4-1efa-4537-9225-12d49c8895c9_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3ec0d2e48adceaf623410b1aa06dad3be8292651 --- /dev/null +++ b/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/3514e8c4-1efa-4537-9225-12d49c8895c9_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:ddd5c19eca13d08e5e5d874683c620c6bd3050e3e658104b1eb842c301451197 +size 155645 diff --git a/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/3514e8c4-1efa-4537-9225-12d49c8895c9_model.json b/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/3514e8c4-1efa-4537-9225-12d49c8895c9_model.json new file mode 100644 index 0000000000000000000000000000000000000000..df3fcf06c919ed60b502731ea14321ad0e714525 --- /dev/null +++ b/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/3514e8c4-1efa-4537-9225-12d49c8895c9_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a8b2c47028617fd1337bd159adf13f293a480c6c0286f1a580aa7f96f6750ed7 +size 194906 diff --git a/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/3514e8c4-1efa-4537-9225-12d49c8895c9_origin.pdf b/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/3514e8c4-1efa-4537-9225-12d49c8895c9_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..52f30a6fd3e6967bae4c116a0712e21a700bb2b4 --- /dev/null +++ b/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/3514e8c4-1efa-4537-9225-12d49c8895c9_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dff46f623c3b7964f558a12ff32239f38dc4cb1e673bcf9a17f76327d8403633 +size 714720 diff --git a/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/full.md b/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e3c2635333b4581ef1b4422f281c8fccdd26e43f --- /dev/null +++ b/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/full.md @@ -0,0 +1,740 @@ +# Algorithms for Acyclic Weighted Finite-State Automata with Failure Arcs + +Anej Svete1 Benjamin Dayan1 Tim Vieira2 Ryan Cotterell1 Jason Eisner2 + +$^{1}$ ETH Zürich $^{2}$ Johns Hopkins University + +{asvete, bdayan}@ethz.ch + +ryan.cotterell@inf.ethz.ch {timv, 
jason}@cs.jhu.edu

# Abstract

Weighted finite-state automata (WFSAs) are commonly used in NLP. Failure transitions are a useful extension for compactly representing backoffs or interpolation in $n$-gram models and CRFs, which are special cases of WFSAs. The pathsum in ordinary acyclic WFSAs is efficiently computed by the backward algorithm in time $\mathcal{O}(|E|)$, where $E$ is the set of transitions. However, this does not allow failure transitions, and preprocessing the WFSA to eliminate failure transitions could greatly increase $|E|$. We extend the backward algorithm to handle failure transitions directly. Our approach is efficient when the average state has outgoing arcs for only a small fraction $s \ll 1$ of the alphabet $\Sigma$. We propose an algorithm for general acyclic WFSAs which runs in $\mathcal{O}(|E| + s|\Sigma||Q||\mathcal{T}_{\mathrm{max}}|\log|\Sigma|)$, where $Q$ is the set of states and $|\mathcal{T}_{\mathrm{max}}|$ is the size of the largest connected component of failure transitions. When the failure transition topology satisfies a condition exemplified by CRFs, the $|\mathcal{T}_{\mathrm{max}}|$ factor can be dropped, and when the weight semiring is a ring, the $\log|\Sigma|$ factor can be dropped. In the latter case (ring-weighted acyclic WFSAs), we also give an alternative algorithm with complexity $\mathcal{O}(|E| + |\Sigma||Q|\min(1, s|\pi_{\mathrm{max}}|))$, where $|\pi_{\mathrm{max}}|$ is the size of the longest failure path.

![](images/e0bacd8f5e0f30219d7f393f876887c1c9e2b541d0747d2b1a78bea8aac94e21.jpg)

https://github.com/rycolab/failure-backward

# 1 Introduction

Weighted finite-state automata (WFSAs) are a common formalism in NLP. Many popular models are special cases, e.g., $n$-gram language models (Brown et al., 1992), conditional random fields (CRFs; Lafferty et al., 2001), maximum-entropy Markov models (McCallum et al., 2000), and semi-Markov models (Sarawagi and Cohen, 2004).
In current practice, the weights in the WFSAs are often derived from a neural network, and neuralized WFSAs constitute the state of the art on a variety of common tasks in NLP (Rastogi et al., 2016; Schwartz et al., 2018; Lin et al., 2019; Jiang et al., 2021; Rijhwani et al., 2021; Alon et al., 2022). WFSAs are also increasingly being used for the design (Shen et al., 2019; Schwartz et al., 2018) and analysis (Peng et al., 2018; Hewitt et al., 2020; Hahn, 2020; Chiang and Cholak, 2022) of neural architectures.

Failure transitions are a useful augmentation of standard WFSAs. First introduced in the context of string matching (Aho and Corasick, 1975), they can be used to represent backoff $n$-gram language models (Allauzen et al., 2003), higher-order CRFs, and variable-order CRFs (VoCRFs; Vieira et al., 2016) in a more compact way. They represent "default" transitions out of states, used when no other transition is possible. For example, in backoff $n$-gram language models, a weighted failure transition from a higher-order history to a lower-order history (e.g., from a 4-gram to a 3-gram) is used to back off before reading a word that was rarely observed with the higher-order history, so that it was not worth including a dedicated transition for that word.

The pathsum computes the total weight of all the paths in a WFSA graph, where the weights may fall in any semiring. Examples include finding the highest-weighted path for Viterbi decoding, computing the posterior marginals (inference) in hidden Markov models, and computing the normalizing constant in CRFs. The pathsum is particularly efficient to compute in acyclic WFSAs with the backward algorithm, whose runtime is $\mathcal{O}(|E|)$. However, the special semantics of failure transitions mean that the ordinary backward algorithm cannot be applied (nor can the forward algorithm). Failure transitions must first be replaced by normal ones (Alg.
2 below), resulting in the failure-expanded transition set $\overline{E}$, which can contain up to $|Q|^2|\Sigma|$ transitions. Replacing failure transitions therefore undoes the compaction afforded by them. This is especially expensive ($|\overline{E}| \gg |E|$) for backoff language models, for example, where each of the many 4-gram states only has explicit transitions in $E$ for symbols $a$ that were observed in training data to follow that 4-gram, but has transitions in $\overline{E}$ for every $a \in \Sigma$. For example, Penn Treebank tagging has $|\Sigma| = 36$ and Czech morphological tagging has $|\Sigma| > 1000$ (Hajič and Hladká, 1998). While Allauzen et al. (2003) present an $\mathcal{O}(n^2|\Sigma||Q|)$ method to preprocess a (possibly cyclic) $n$-gram language model WFSA with failure transitions such that the pathsum remains identical, their method only applies to the case of the tropical semiring.

In this paper, we study the problem of efficiently computing the pathsum in WFSAs with failure transitions over general semirings. We specifically focus on acyclic WFSAs,$^{2}$ introducing several algorithms, all based on the backward algorithm, that take advantage of the compact structure induced by the failure transitions. Our improvements are strongest for WFSAs that are sparse in a sense to be defined shortly. We summarise our contributions as follows:

- We present simple baseline algorithms using failure transition removal (§3.1) and memoization (§3.2).
- We present an algorithm for computing the pathsum of ring-weighted WFSAs, utilizing subtraction (§4).
- With some extra work to avoid subtraction (§5), we extend the algorithm to general semirings (§6).

# 2 Preliminaries

This section defines WFSAs, the pathsum problem, the backward algorithm, and failure transitions.

Definition 1.
A semiring is a 5-tuple $\mathcal{W} = (\mathbb{K}, \oplus, \otimes, \mathbf{0}, \mathbf{1})$ where $\mathbb{K}$ is a set equipped with operations $\oplus$ and $\otimes$, s.t. $(\mathbb{K}, \oplus, \mathbf{0})$ is a commutative monoid, $(\mathbb{K}, \otimes, \mathbf{1})$ is a monoid, $\otimes$ distributes over $\oplus$, and $\mathbf{0}$ annihilates $\otimes$.

Definition 2. A weighted finite-state automaton (WFSA) is a 5-tuple $\mathcal{A} = \langle \Sigma, Q, E, \lambda, \rho \rangle$, where $\Sigma$ is a finite alphabet, $Q$ a finite set of states, $E$ a collection of transitions in $Q \times \Sigma \times \mathbb{K} \times Q$, $\lambda: Q \to \mathbb{K}$ the initial-state weighting function, and $\rho: Q \to \mathbb{K}$ the final-state weighting function.

To improve readability, we render a transition $(q,a,w,q')$ as $q \xrightarrow{a/w} q'$. We further define $E(q) \stackrel{\mathrm{def}}{=} \{e \mid \exists a, w, q': e = q \xrightarrow{a/w} q' \in E\}$ as the set of outgoing transitions of $q \in Q$, and $E(q,a)$ as those labeled with $a \in \Sigma$. $\Sigma(q) \stackrel{\mathrm{def}}{=} \{a \mid E(q,a) \neq \varnothing\}$ denotes the set of transition labels in $E(q)$.

Importantly, we will assume that the graph $(Q,E)$ is acyclic (see footnote 2). Less importantly, our definition of WFSAs does not allow $\varepsilon$-transitions, assuming that they have been eliminated in advance (Mohri, 2002a), which is easy in the acyclic case. Our runtime analyses assume for simplicity that $(i)$ the graph is connected (implying $|E| \geq |Q| - 1$) and $(ii)$ that for each $q, q' \in Q$, $E$ contains at most one transition $q \xrightarrow{a/w} q'$ for any $a \in \Sigma$. This can always be achieved by replacing "parallel" transitions $\left\{q \xrightarrow{a/w_i} q' \,\middle|\, i\right\} \subseteq E$ with $q \xrightarrow{a/\oplus_i w_i} q'$.

Definition 3.
A path $\pi$ in a WFSA $\mathcal{A}$ is a sequence of consecutive transitions in $E$,

$$
q_{0} \xrightarrow{a_{1}/w_{1}} q_{1} \cdots q_{N-1} \xrightarrow{a_{N}/w_{N}} q_{N}.
$$

$\mathrm{p}(\pi) \stackrel{\mathrm{def}}{=} q_{0}$ and $\mathrm{n}(\pi) \stackrel{\mathrm{def}}{=} q_{N}$ refer to the initial and final states of $\pi$, respectively. $\Pi(\mathcal{A})$ denotes the set of all paths in $\mathcal{A}$.

Definition 4. The inner path weight is defined as $\mathrm{w_I}(\pi) \stackrel{\mathrm{def}}{=} \bigotimes_{n=1}^{N} w_n$ and the (full) path weight as $\mathrm{w}(\pi) \stackrel{\mathrm{def}}{=} \lambda(\mathrm{p}(\pi)) \otimes \mathrm{w_I}(\pi) \otimes \rho(\mathrm{n}(\pi))$.

Definition 5. The pathsum of $\mathcal{A}$ is defined as

$$
\mathbf{Z}(\mathcal{A}) \stackrel{\mathrm{def}}{=} \bigoplus_{\pi \in \Pi(\mathcal{A})} \mathrm{w}(\pi). \tag{1}
$$

The problem of computing the pathsum is sometimes also referred to as the generalized shortest-distance problem (Mohri, 2002b).

Definition 6. The backward value $\beta(q)$ of a state $q \in Q$ is the sum of the inner weights of all paths $\pi$ starting at $q$, right-multiplied by $\rho(\mathrm{n}(\pi))$, i.e.,

$$
\beta(q) \stackrel{\mathrm{def}}{=} \bigoplus_{\substack{\pi \in \Pi(\mathcal{A}) \\ \mathrm{p}(\pi) = q}} \mathrm{w_I}(\pi) \otimes \rho(\mathrm{n}(\pi)). \tag{2}
$$

We extend this definition to state-symbol pairs $(q,a) \in Q \times \Sigma$ as

$$
\beta(q,a) \stackrel{\mathrm{def}}{=} \bigoplus_{q \xrightarrow{a/w} q' \in E} w \otimes \beta(q'). \tag{3}
$$

The value $\beta(q,a)$ can be seen as the result of restricting the paths contributing to $\beta(q)$ to those starting with $a \in \Sigma$.

For $S \subseteq \Sigma$ we also define

$$
\beta(q,S) \stackrel{\mathrm{def}}{=} \bigoplus_{a \in S} \beta(q,a). \tag{4}
$$

Notice that $\beta(q) = \rho(q) \oplus \beta(q, \Sigma(q))$.

Naively computing the pathsum by enumerating all $\pi \in \Pi(\mathcal{A})$ in an acyclic WFSA would result in an exponential runtime. However, algebraic properties of semirings allow for faster algorithms (Mohri, 2002b). An example is the backward algorithm, a dynamic program which computes the backward values and the pathsum in acyclic WFSAs in time $\mathcal{O}(|E|)$. It exploits the fact that, in acyclic WFSAs, $Q$ can always be topologically sorted and the backward values can be computed in reverse topological order. This guarantees that the backward values of $q$'s children will have been computed by the time we expand $q$, meaning that $\beta(q)$ can be computed as

$$
\beta(q) \leftarrow \rho(q) \oplus \bigoplus_{q \xrightarrow{a/w} q' \in E} w \otimes \beta(q'). \tag{5}
$$

The pseudocode is given in Alg. 1. All our algorithms are based on the backward algorithm.

# Algorithm 1

1: def Backward($\mathcal{A}$):
2: for $q \in \text{ReverseTopological}(\mathcal{A})$:
3: $\beta(q, \Sigma(q)) \gets \bigoplus_{q \xrightarrow{a/w} q' \in E} w \otimes \beta(q')$
4: $\beta(q) \gets \rho(q) \oplus \beta(q, \Sigma(q))$
5: return $\bigoplus_{q \in Q} \lambda(q) \otimes \beta(q)$ $\triangleright$ equals $\mathbf{Z}(\mathcal{A})$

# 2.1 Failure Transitions

We consider an extension of WFSAs where any state can have a single fallback state $q^{\phi}$.

Definition 7. A WFSA with failure transitions (WFSA-$\phi$) is a 6-tuple $\mathcal{A} = \langle \Sigma, Q, E, \lambda, \rho, \phi \rangle$, where $\phi$ is a failure function, i.e., a partial function that maps some states $q \in Q$ to their fallback state $\phi(q) = q^{\phi}$.
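As an illustration, the backward algorithm of Alg. 1 can be instantiated in the real semiring $(+, \times)$. The following is a minimal sketch, not the authors' implementation; it assumes states $0, \ldots, n-1$ are numbered in topological order, so that decreasing order is a reverse topological traversal:

```python
# Minimal sketch of the backward algorithm (Alg. 1) in the real semiring
# (plus, times). Assumes states 0..n-1 are numbered in topological order.

def backward(n_states, E, lam, rho):
    """E: transitions (q, a, w, q2); lam/rho: initial/final state weights."""
    out = {}
    for q, _a, w, q2 in E:
        out.setdefault(q, []).append((w, q2))
    beta = [0.0] * n_states
    for q in reversed(range(n_states)):      # reverse topological order
        # beta(q) = rho(q) (+) sum over outgoing arcs of w (*) beta(q')
        beta[q] = rho[q] + sum(w * beta[q2] for w, q2 in out.get(q, []))
    return sum(lam[q] * beta[q] for q in range(n_states))  # pathsum Z(A)

# Two paths from the initial state 0 to the final state 2:
#   0 -a/0.5-> 1 -b/2.0-> 2   (inner weight 1.0)
#   0 -b/1.0-> 2              (inner weight 1.0)
E = [(0, "a", 0.5, 1), (1, "b", 2.0, 2), (0, "b", 1.0, 2)]
print(backward(3, E, lam=[1.0, 0.0, 0.0], rho=[0.0, 0.0, 1.0]))  # 2.0
```

The two paths each carry weight 1.0, so the pathsum is 2.0, matching the single backward pass.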
Fallback states can be represented by transitions $q \xrightarrow{\phi/\mathbf{1}} q^{\phi}$ with a special meaning:$^3$ they are only traversed upon reading a symbol $a \notin \Sigma(q)$ and thus represent a default option used when no ordinary transition is available.$^4$ This formalization means that every state has at most one fallback state.

We do not include $\phi$ in $\Sigma$ or $\phi$-transitions in $E$. We denote the set of $\phi$-transitions as $E^{\phi}$ and assume that $E \cup E^{\phi}$ still forms an acyclic graph.

$\phi$-transitions can be explicitly represented in a normal WFSA by expansion of $\phi$-transitions.

Definition 8. Given an acyclic WFSA-$\phi$ $\mathcal{A} = \langle \Sigma, Q, E, \lambda, \rho, \phi \rangle$, we introduce the recursively defined failure-expanded transition set as follows:

$$
\overline{E}(q,a) \stackrel{\mathrm{def}}{=} \begin{cases} E(q,a) & \text{if } a \in \Sigma(q) \\ \overline{E}(q^{\phi},a) & \text{if } q \text{ has a } \phi\text{-arc} \\ \varnothing & \text{otherwise} \end{cases} \tag{6}
$$

and the set $\overline{E} \subseteq Q \times \Sigma \times \mathbb{K} \times Q$ as the union of these sets over $Q$ and $\Sigma$.

$\overline{E}(q,a)$ is well-defined due to the assumed acyclicity of $E^{\phi}$. It may be empty. $\overline{E}$ captures all "indirect" transitions which can be made across arbitrarily long paths of only $\phi$-transitions. $\overline{\Sigma}(q)$, analogously to $\Sigma(q)$, denotes the set of outgoing symbols for $q \in Q$ in the failure-expanded WFSA-$\phi$.

Definition 9. We define the average out-symbol fraction $s$ of a WFSA as

$$
s = \operatorname*{mean}_{q \in Q} \frac{|\Sigma(q)|}{|\Sigma|} = \frac{\sum_{q \in Q} |\Sigma(q)|}{|Q||\Sigma|}. \tag{7}
$$

$s \in [0,1]$ is a measure of completeness of the WFSA.
We correspondingly define $\overline{s}$, the equivalent in the failure-expanded transition set $\overline{E}$.

We say informally that a WFSA is $\Sigma$-sparse if $s \ll 1$, so on average $|\Sigma(q)| \ll |\Sigma|$. Intuitively, this means that the average state only has outgoing transitions on a few distinct symbols. We will show that the runtime tradeoff between our baseline pathsum algorithm MemoizationBackward (Alg. 3) and later algorithms depends on the difference between $s$ and $\overline{s}$. Our algorithms are efficient when $s \ll \overline{s}$: intuitively, in the regime where failure expansion would add outgoing transitions for many new symbols.

Correcting Eq. (5) to take $\phi$-transitions into account, the backward values in a WFSA-$\phi$ can be computed as

$$
\beta(q) \leftarrow \rho(q) \oplus \bigoplus_{q \xrightarrow{a/w} q' \in \overline{E}} w \otimes \beta(q'). \tag{8}
$$

Importantly, the following equality holds:

$$
\beta(q,a) = \begin{cases} \bigoplus_{q \xrightarrow{a/w} q' \in E} w \otimes \beta(q') & \text{if } a \in \Sigma(q) \\ \beta(q^{\phi}, a) & \text{otherwise} \end{cases} \tag{9}
$$

This follows straight from the definition of $\phi$. It states that the backward values of the state-symbol pairs $(q,a)$ in a WFSA-$\phi$ equal the ones in a normal WFSA if an $a$-labeled transition can be taken; if not, the backward value is inherited from the fallback state, since the $\mathbf{1}$-weighted $\phi$-transition is taken.

Connected components of the graph formed by the $\phi$-transitions of a WFSA-$\phi$ are trees (specifically, anti-arborescences), since a state can have at most one outgoing $\phi$-transition and the WFSA is acyclic. This motivates the following definition.

Definition 10. Let $\mathcal{A}$ be an acyclic WFSA-$\phi$.
A failure tree $\mathcal{T}$ is a connected component of the graph formed by the $\phi$-transitions of $\mathcal{A}$.

An example of a failure tree $\mathcal{T}$ is shown in Fig. 1a. We write $|\mathcal{T}|$ for the number of states in $\mathcal{T}$, with $|\mathcal{T}_{\mathrm{max}}|$ being the number of states in the largest failure tree and $|\pi_{\mathrm{max}}|$ the number of states in the longest failure path.

$\mathcal{T}_q$ denotes the failure tree containing $q \in Q$. We write $q \prec q'$ to say that $q$ is a proper ancestor of $q'$ in $\mathcal{T}_q$, i.e., there is a non-empty $\phi$-path from $q$ to $q'$.

# 3 Expanding Failure Transitions

The pathsum of a WFSA-$\phi$ can be naively computed by replacing the $\phi$-transitions with normal ones according to the semantics of the $\phi$-transitions and running the backward algorithm on the expanded WFSA. Before introducing our contributions, we present this method for pedagogical purposes. While this solution is near-optimal for non-$\Sigma$-sparse WFSAs, it can be improved for certain $\Sigma$-sparse WFSAs.

# 3.1 Expanding Failure Transitions

Failure expansion is a transformation of an acyclic WFSA-$\phi$ which replaces the $\phi$-transitions while retaining acyclicity. See Alg. 2 for the pseudocode, Fig. 1b for an example of failure expansion, and App. A for an example of how the backward algorithm operates in this setting.

# Algorithm 2

1: def FailureExpansion($\mathcal{A}$): $\triangleright$ Will be updated
2: $\overline{E} \gets E$
3: for $q \in \text{ReverseTopological}(E^{\phi})$:
4: $\overline{E} \gets \overline{E} \cup \{q \xrightarrow{a/w} q' \mid$
5: $\quad q^{\phi} \xrightarrow{a/w} q' \in \overline{E},\, a \notin \Sigma(q)\}$
6: return $\overline{E}$

![](images/097788c9b9cb8f8dc172203fc97d9454e1d3246784b56804faa6aaf26731ea26.jpg)
(a) Example of a failure tree. Its root is node 4.
![](images/5c3d63e8283b45f06a351919c891b1a37224a74528a131c657515c5475ba1db5.jpg)
(b) To expand failure transitions, the dashed transitions are added and the $\phi$-transitions are removed.

In deterministic WFSA-$\phi$'s, failure expansion adds $\mathcal{O}(|Q||\Sigma|)$ new transitions, since each of the $|Q|$ states $q$ gains new transitions to each of $q^{\phi}$'s children. More precisely, $q$ gains up to $|\overline{\Sigma}(q) \setminus \Sigma(q)|$ transitions, where $\overline{\Sigma}(q) \stackrel{\mathrm{def}}{=} \{a : \overline{E}(q^{\phi}, a) \neq \varnothing\}$ denotes the set of symbols on the outgoing transitions from $q^{\phi}$ in the failure-expanded automaton. In non-deterministic WFSA-$\phi$'s, however, $q$ can gain up to $|Q||\overline{\Sigma}(q) \setminus \Sigma(q)|$ transitions, since $q^{\phi}$ might have up to $|Q|$ $a$-labeled transitions $\forall a \in \Sigma$. This results in $\mathcal{O}(|Q|^2|\Sigma|)$ new transitions. Following the derivation in App. B, the runtime of the backward algorithm on the $\phi$-expanded WFSA-$\phi$ is therefore $\mathcal{O}(|\overline{E}|) = \mathcal{O}(|E| + |Q|^2(\overline{s} - s)|\Sigma|)$ in the general case and $\mathcal{O}(|\overline{E}|) = \mathcal{O}(|E| + |Q|(\overline{s} - s)|\Sigma|)$ in deterministic WFSAs.

# 3.2 Decomposing the Backward Values

The algorithms we present in later sections sidestep the need to materialize all the additional transitions replacing the failure transitions. They are based on a decomposition of the backward values into two components: the local and the failure component. Using Eq. (4), we can split $\beta(q)$ into

$$
\beta(q) = \rho(q) \oplus \beta(q, \Sigma) \tag{10}
$$

$$
\beta(q, \Sigma) = \underbrace{\beta(q, \Sigma(q))}_{\text{local}} \oplus \underbrace{\beta(q, \Sigma \setminus \Sigma(q))}_{\text{failure}}. \tag{11}
$$

The two terms on the right-hand side of Eq.
(11) can be further expanded as

$$
\beta(q, \Sigma(q)) = \bigoplus_{q \xrightarrow{a/w} q' \in E} w \otimes \beta(q') \tag{12}
$$

$$
\beta(q, \Sigma \setminus \Sigma(q)) = \bigoplus_{b \in \Sigma \setminus \Sigma(q)} \beta(q^{\phi}, b) \tag{13}
$$

except that the second term is $\mathbf{0}$ if $q$ has no failure transition (in which case $q^{\phi}$ is not defined). $\beta(q, \Sigma(q))$ is exactly the quantity computed by Alg. 1 on line 3; our modifications never change this computation. Rather, all of our algorithms seek to simplify the computation of $\beta(q, \Sigma \setminus \Sigma(q))$.

Eq. (13) makes it possible to avoid failure expansion by storing not only $\beta(q)$ but also the values $\beta(q,a)$ at each state $q$. Since $q^{\phi}$ will then memoize all needed $\beta(q^{\phi}, b)$ values, the sum (13) becomes easy to compute for any $q$ that may back off to $q^{\phi}$. Passing the summand $\beta(q^{\phi}, b)$ back to $q$ is cheaper than passing back all of the arcs $q^{\phi} \xrightarrow{b/w} q' \in \overline{E}$ that contribute to that summand, as Alg. 2 does: a nondeterministic WFSA may have multiple such arcs. The pseudocode of this modification is presented in Alg. 3. Notice the additional term $\beta(q, \Sigma \setminus \Sigma(q))$ on line 10 in Alg. 3, which was not needed in the backward algorithm for ordinary WFSAs. See App. A for a guided example on a small WFSA.

In the general case of non-deterministic WFSAs, failure expansion may have to loop over as many as $|Q||\Sigma \setminus \Sigma(q)|$ transitions at each state $q$. Alg. 3 reduces this to a loop over $|\Sigma \setminus \Sigma(q)|$ symbols, which is $(\overline{s} - s)|\Sigma|$ on average. The full complexity of Alg. 3 is then $\mathcal{O}\big(|E| + (\overline{s} - s)|\Sigma||Q|\big)$ (similarly to App. B).

The shortcoming of Alg.
3 is that $(\overline{s} - s)|\Sigma|$ may still be large. The terms $\beta(q^{\phi}, b)$ must be individually copied back to $q$ as $\beta(q, b)$ for each of the $|\Sigma \setminus \Sigma(q)|$ symbols $b$ . Our proposed algorithms in the following subsections avoid the overhead incurred by this copying.

# Algorithm 3

1: def MemoizationBackward( $\mathcal{A}$ ):
2: for $q \in \text{ReverseTopological}(\mathcal{A})$ :
3: for $a \in \Sigma(q)$ :
4: $\beta (q, a) \leftarrow \bigoplus_{q \xrightarrow {a / w} q ^ {\prime} \in E} w \otimes \beta (q ^ {\prime})$
5: $\beta (q,\Sigma \setminus \Sigma (q))\gets \mathbf{0}$
6: if $q$ has a fallback state:
7: for $b \in \Sigma \setminus \Sigma(q)$ :
8: $\beta (q,b)\gets \beta (q^{\phi},b)$
9: $\beta (q,\Sigma \setminus \Sigma (q))\oplus = \beta (q^{\phi},b)$
10: $\beta (q,\Sigma)\gets \beta (q,\Sigma (q))\oplus \beta (q,\Sigma \setminus \Sigma (q))$
11: $\beta (q)\gets \rho (q)\oplus \beta (q,\Sigma)$
12: return $\oplus_{q\in Q}\lambda (q)\otimes \beta (q)$

# 4 An Algorithm with Subtraction

Alg. 3 computes $\beta (q)$ in part by summing up to $|\Sigma \setminus \Sigma (q)|$ values passed back from $q^{\phi}$ . This section presents a more efficient algorithm for ring-weighted $\Sigma$ -sparse WFSAs. As rings allow subtraction, we can compute the failure term as follows:

$$
\beta (q, \Sigma \setminus \Sigma (q)) = \beta \left(q ^ {\phi}, \Sigma\right) \ominus \beta \left(q ^ {\phi}, \Sigma (q)\right) \tag {14}
$$

Recall that $\beta(q^{\phi}, \Sigma(q)) \stackrel{\mathrm{def}}{=} \oplus_{a \in \Sigma(q)} \beta(q^{\phi}, a)$ by (4). Thus Eq. (14) effectively uses $|\Sigma(q)|$ subtractions (for $a \in \Sigma(q)$ ), whereas Eq. (13) used $|\Sigma \setminus \Sigma(q)|$ additions (for $b \in \Sigma \setminus \Sigma(q)$ ). In the runtime analysis, these subtractions are already covered by the $\mathcal{O}(|\Sigma(q)|)$ runtime needed for the $|\Sigma(q)|$ additions in Eq. (12).
Overall, Eqs. (11), (12) and (14) compute $\beta(q, \Sigma)$ by combining $\oplus$ and $\ominus$ to replace just $|\Sigma(q)|$ of the summands of $\beta(q^{\phi}, \Sigma)$ —namely, those overridden at $q$ .

But how fast is it to find the subtrahends $\beta(q^{\phi}, a)$ for $a \in \Sigma(q)$ ? Eagerly storing $\beta(q, a)$ (if non- $\mathbf{0}$ ) for every $q \in Q$ , $a \in \Sigma$ (in case it is needed during backoff) would allow constant-time lookup, but doing so would require copying $\beta(q^{\phi}, b)$ backward to $\beta(q, b)$ for all $b \in \Sigma \setminus \Sigma(q)$ , just as in Alg. 3, which would incur the same complexity of $\mathcal{O}(|\Sigma \setminus \Sigma(q)|)$ . So instead of computing and storing the full set of $\beta(q^{\phi}, a)$ values $\forall a \in \Sigma$ , we will compute on demand only the ones that need replacement. This involves following $\phi$ -arcs forward until we find an $a$ -arc, or run out of $\phi$ -arcs, or encounter a memo because $\beta(q^{\phi}, a)$ was already needed by a different ancestor of $q^{\phi}$ in its failure tree. The full algorithm is presented as Alg. 4.

# Algorithm 4

1: def RingBackward( $\mathcal{A}$ ):
2: return $\bigoplus_{q\in Q}\lambda (q)\otimes \beta (q)$
3: def $\beta (q)$ :
4: return $\beta (q)\gets \rho (q)\oplus \beta (q,\Sigma)$
5: def $\beta (q,\Sigma)$ :
6: $\beta (q,\Sigma (q))\gets \oplus_{a\in \Sigma (q)}\beta (q,a)$
7: if $q$ has no fallback state: return $\beta(q, \Sigma(q))$
8: $\beta(q^{\phi}, \Sigma(q)) \gets \oplus_{a \in \Sigma(q)} \beta(q^{\phi}, a)$
9: return $\beta (q ^ {\phi}, \Sigma) \oplus (\beta (q, \Sigma (q)) \ominus \beta (q ^ {\phi}, \Sigma (q)))$
10: def $\beta (q,a)$ : $\triangleright$ Memoizes its result
11: if $a \in \Sigma(q)$ : return $\bigoplus_{q \xrightarrow{a/w} q' \in E} w \otimes \beta(q')$
12: else if $q$ has a fallback state: return $\beta(q^{\phi}, a)$
13: else return $\mathbf{0}$

# 4.1 Runtime

The runtime of Alg.
4 is on the order of the number of calls to line 10, plus $|E|$ to cover all the sums in line 11 (which executes at most once for each $q, a$ pair, thanks to memoization). Every $a \in \Sigma(q)$ results in two such calls, at lines 6 and 8; there is also a possible recursive call at line 12 if $a \in \Sigma(q')$ for at least one proper ancestor $q' \prec q$ in the failure tree (thanks to memoization, this happens at most once per $q, a$ pair, even if there are multiple choices of $q'$ ). Thus, the overall runtime is $\mathcal{O}\left(|E| + \sum_{q \in Q} |\hat{\Sigma}(q)|\right)$ , where $\hat{\Sigma}(q) \subseteq \Sigma$ is defined as $\bigcup_{q' \preceq q} \Sigma(q')$ . A looser bound written in terms of $s$ is $\mathcal{O}\left(|E| + |\Sigma||Q| \min(1, s|\pi_{\max}|)\right)$ .

We will revisit ring-weighted WFSA- $\phi$ 's in $\S 6.4$ .

# 5 Incrementally Modified Aggregator

The point of Alg. 4 line 9 is to replace some summands of $\beta(q^{\phi}, \Sigma)$ to get $\beta(q, \Sigma)$ . When no subtraction operator $\ominus$ is available (e.g., if $\oplus = \max$ ), we can use an aggregation data structure that is designed to efficiently replace individual summands in a sum without using subtraction. For example, a Fenwick tree (Fenwick, 1994) can replace a summand and recompute the sum in $\mathcal{O}(\log N)$ time, where $N$ is the number of summands. (Fenwick trees are similar to binary heaps; they are reviewed in App. C.)
Here we merely give the interface to aggregators:

1: class Aggregator(): $\triangleright$ We use $\gamma$ to refer to an aggregator instance
2: def set( $a$ : $\Sigma$ , $v$ : $\mathbb{K}$ ) $\triangleright$ Updates $\gamma (a)\gets v$
3: def get( $a$ : $\Sigma$ ) $\rightarrow \mathbb{K}$ $\triangleright$ Returns $\gamma (a)$ (default $\mathbf{0}$ )
4: def value() $\rightarrow \mathbb{K}$ $\triangleright$ Returns $\bigoplus_{a\in \Sigma}$ get( $a$ )
5: def undo( $n$ : $\mathbb{N}$ ) $\triangleright$ Reverts the last $n$ updates

We will represent each sum $\beta (q,\Sigma)$ in Alg. 4 as the total value of an aggregator that stores summands $\beta (q,a)$ for $a\in \Sigma$ . In principle, this aggregator could be obtained by copying the aggregator for $\beta (q^{\phi},\Sigma)$ and then modifying some summands (see line 9). However, aggregators are not constant-size data structures, so creating all of these slightly different aggregators would be expensive.

Instead, our strategy will be to use just a single aggregator, for the "current" state $q$ , and make small modifications as we visit different states $q'$ . More precisely, we have one aggregator $\gamma$ per failure tree, first created at the tree's root. When we step backwards in the failure tree, say from $q^{\phi}$ to $q$ , we modify "just a few" summands in $\gamma$ so that $\beta(q, a)$ replaces $\beta(q^{\phi}, a)$ for $a \in \Sigma(q)$ . This is fast if $\Sigma(q)$ is small. We can now obtain $\beta(q, \Sigma)$ as the aggregator's new total value. To visit other ancestors of $q^{\phi}$ , we must first move forward to $q^{\phi}$ again, which we do by reverting the modifications.

Definition 11. Aggregator $\gamma$ represents $q\in Q$ if

$$
\gamma (a) = \beta (q, a), \forall a \in \Sigma
$$

$\gamma$ will be updated to represent different states in the failure tree at different times. When $\gamma$ represents $q$ , it holds that $\beta(q) = \rho(q) \oplus \gamma.\mathrm{value}()$ , by (10).
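As a concrete (and deliberately naive) illustration of this interface, the sketch below backs the aggregator with a dictionary and an undo log. It is not the paper's implementation: `value()` folds $\oplus$ over all summands in $\mathcal{O}(N)$ time, whereas the Fenwick-tree aggregator of App. C maintains partial sums. The class and parameter names here are our own.

```python
class Aggregator:
    """Dictionary-backed aggregator with an undo log (a sketch of the
    interface in this section; App. C's tree-based version is faster)."""

    def __init__(self, oplus, zero):
        self.oplus = oplus   # the semiring addition ⊕ (a binary function)
        self.zero = zero     # the semiring zero element 0̄
        self.vals = {}       # a ↦ γ(a), for the keys set so far
        self.log = []        # undo log of (key, previous value) pairs

    def set(self, a, v):     # Updates γ(a) ← v, remembering the old value
        self.log.append((a, self.vals.get(a, self.zero)))
        self.vals[a] = v

    def get(self, a):        # Returns γ(a) (default 0̄)
        return self.vals.get(a, self.zero)

    def value(self):         # Returns ⊕_a γ(a), folding over all summands
        total = self.zero
        for v in self.vals.values():
            total = self.oplus(total, v)
        return total

    def undo(self, n):       # Reverts the last n updates
        for _ in range(n):
            a, old = self.log.pop()
            self.vals[a] = old
```

For example, with $\oplus = \max$ and zero $-\infty$ , replacing a summand changes the total, and `undo` restores it.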
Updates are carried out by the methods in Alg. 5, which move backward and forward in a failure tree. When $\gamma$ represents $q^{\phi}$ , we can call $\operatorname{Visit}(\gamma, q)$ to update $\gamma$ so that it represents $q$ . At any later time when $\gamma$ again represents $q$ , we can call $\operatorname{Leave}(\gamma, q)$ to undo this update, so that $\gamma$ again represents $q^{\phi}$ .

Each Visit $(\gamma, q)$ or Leave $(\gamma, q)$ call runs in time $\mathcal{O}\big(|\Sigma(q)| \log |\Sigma|\big)$ , since it sets $|\Sigma(q)|$ values in $\gamma$ .

# Algorithm 5

1: def Visit( $\gamma$ , $q$ ): $\triangleright$ update $\gamma$ that represented $q^{\phi}$ to represent $q$
2: for $a \in \Sigma(q)$ :
3: $\gamma .\mathrm{set}(a,\beta (q,a))$ $\triangleright$ Use the memoizing $\beta (q,a)$ from Alg. 4
4: def Leave( $\gamma$ , $q$ ): $\triangleright$ update $\gamma$ that represented $q$ to represent $q^{\phi}$
5: $\gamma .\mathrm{undo}\left(|\Sigma (q)|\right)$ $\triangleright$ revert all the updates made by Visit

Note that $\operatorname{Visit}(\gamma, q)$ accomplishes the same goal as Alg. 4 line 9, but with an extra runtime factor of $\mathcal{O}(\log |\Sigma|)$ to avoid subtraction. It may still be faster than the $\mathcal{O}(|\Sigma \setminus \Sigma(q)|)$ runtime of Alg. 3 line 10, when $|\Sigma(q)|$ is quite small relative to $|\Sigma|$ .

# 6 A General Backward Algorithm for Acyclic WFSAs with Failure Transitions

Alg. 6 is our most general version of the backward algorithm for computing the pathsum of an acyclic WFSA- $\phi$ . It makes use of the Aggregator and pseudocode from the previous section.
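To make the Visit/Leave discipline of Alg. 5 concrete, here is a toy run over a two-state failure tree in the max-semiring. All names, weights, and the choice $\oplus = \max$ are our own illustrative assumptions; the memoized $\beta(q, a)$ summands are simply hard-coded.

```python
# Toy illustration of Alg. 5: one aggregator per failure tree,
# updated in place as we move between states; ⊕ = max here.
ZERO = float("-inf")

# Hypothetical memoized summands β(q, a) for a ∈ Σ(q), per state.
# "root" has no fallback state; "q" backs off to "root" (qᵠ = root).
beta = {
    "root": {"b": 0.4, "c": 0.7},   # Σ(root) = {b, c}
    "q":    {"a": 0.9, "b": 0.2},   # Σ(q) = {a, b}: overrides b, adds a
}

gamma, log = {}, []                  # aggregator contents and undo log

def visit(q):                        # γ represented qᵠ; make it represent q
    for a, v in beta[q].items():
        log.append((a, gamma.get(a, ZERO)))
        gamma[a] = v

def leave(q):                        # revert all updates made by visit(q)
    for _ in range(len(beta[q])):
        a, old = log.pop()
        gamma[a] = old

def value():                         # ⊕ = max over all stored summands
    return max(gamma.values(), default=ZERO)

visit("root")                        # initialize γ at the tree's root
assert value() == 0.7                # β(root, Σ) = max(0.4, 0.7)
visit("q")                           # β(q, Σ) = max(0.9, 0.2, 0.7)
assert value() == 0.9                # b was overridden; c retained
leave("q")                           # γ represents the root again
assert value() == 0.7
```

Note that after `visit("q")`, the summand for `c` is still the root's value, exactly as line 12 of Alg. 4 would return it by backoff.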
# Algorithm 6

1: def GeneralBackward( $\mathcal{A}$ ):
2: for $q \in \text{ReverseTopological}(\mathcal{A})$ :
3: $\mathcal{T}\gets \mathcal{T}_q$
4: if $q$ has no fallback state: $\triangleright$ $q$ is root of failure tree
5: $\gamma_{\mathcal{T}} \gets$ new Aggregator() $\triangleright$ New empty aggregator
6: Visit $(\gamma_{\mathcal{T}}, q)$ $\triangleright$ Initialize $\gamma_{\mathcal{T}}$
7: $q_{\mathcal{T}}\gets q$ $\triangleright$ Remember the state represented by $\gamma_{\mathcal{T}}$
8: while $q_{\mathcal{T}}$ is not a descendant of $q$ in $\mathcal{T}$ :
9: Leave $(\gamma_{\mathcal{T}},q_{\mathcal{T}})$ ; $q_{\mathcal{T}}\gets q_{\mathcal{T}}^{\phi}$ $\triangleright$ Descend in $\mathcal{T}$
10: Visit $^{+}$ ( $\gamma_{\mathcal{T}}, q, q_{\mathcal{T}}$ ); $q_{\mathcal{T}} \gets q$ $\triangleright$ Ascend in $\mathcal{T}$
11: $\triangleright$ Now $\gamma_{\mathcal{T}}$ represents $q$ (thanks to all of the above)
12: $\beta (q)\gets \rho (q)\oplus \gamma_{\mathcal{T}}.\mathrm{value}()$
13: return $\bigoplus_{q\in Q}\lambda (q)\otimes \beta (q)$
14: def Visit $^{+}(\gamma, q, q')$ : $\triangleright$ update $\gamma$ that represented $q'$ to represent $q$
15: if $q^{\phi} \neq q'$ : Visit $^{+}(\gamma, q^{\phi}, q')$
16: Visit $(\gamma, q)$

Like Alg. 3, this computes $\beta(q)$ at all states in reverse topological order. However, it attempts to share work among states $q$ in the same failure tree $\mathcal{T}$ , by having them share an aggregator $\gamma_{\mathcal{T}}$ that currently represents some state $q_{\mathcal{T}} \in \mathcal{T}$ (in the sense of definition 11). The algorithm updates the aggregator to represent $q$ , by descending in the failure tree until it reaches a common descendant, and then ascending again until it reaches $q$ .

(The $\log |\Sigma|$ factor in the update cost of Visit and Leave can be tightened to $\log \max_{q\in Q}|\overline{\Sigma} (q)|$ ; see App. C for details.)
To make line 8 efficient, we preprocess each failure tree by visiting its states in depth-first order and annotating each state with the time interval during which it is on the stack. The loop at line 8 continues until the $q_{\mathcal{T}}$ interval contains the $q$ interval.

# 6.1 Runtime

As in Alg. 4 (see §4.1), $\mathcal{O}(|E|)$ runtime is needed to sum over the non-failure transitions from each state. The rest of the runtime is dominated by the calls to Visit and Leave. Recall from §5 that visiting or leaving $q$ takes time $\mathcal{O}(|\Sigma(q)|\log |\Sigma|)$ . Since a state can be left at most once for each time it is visited, it suffices to count just the visits.

The number of visits to each state depends on the (reverse) topological order used at line 2. In the best case, $q$ iterates over the states of each failure tree in depth-first order, starting at the root. Then Visit is called only on the current iterate $q$ — either as a root (line 6) or as a parent (line 16). Since each state is Visited exactly once, the total runtime is $\mathcal{O}(|E| + \sum_{q \in Q} |\Sigma(q)| \log |\Sigma|)$ . In the worst case, however, each $q$ at line 2 is far in the failure tree from the previous one, forcing $q_{\mathcal{T}}$ to descend all the way to the root and then ascend again to $q$ . This means line 16 visits all states $q'$ for which $q \preceq q'$ . The total runtime is therefore $\mathcal{O}(|E| + \sum_{q' \in Q} |\Sigma(q')| \operatorname{ancs}(q') \log |\Sigma|)$ , where $\operatorname{ancs}(q') \stackrel{\mathrm{def}}{=} | \{q : q \preceq q'\} |$ is the number of ancestors of $q'$ in the failure tree. Renaming the summation variable, we get $\mathcal{O}(|E| + \sum_{q \in Q} |\Sigma(q)| \operatorname{ancs}(q) \log |\Sigma|)$ .

We can get a simpler but looser worst-case bound by increasing $\mathrm{ancs}(q)$ to $|\mathcal{T}_{\mathrm{max}}|$ , the maximum size of any failure tree.
Rewriting this in terms of $s$ , we have bounded the runtime by $\mathcal{O}(|E| + s|\Sigma||Q||\mathcal{T}_{\mathrm{max}}|\log |\Sigma|)$ , where, however, in the best case we avoid the $|\mathcal{T}_{\mathrm{max}}|$ factor.

The worst-case behavior is illustrated by Fig. 2, where the only possible topological order is $1,2,3,4,5,\ldots$ . When line 2 iterates over state 5 immediately after state 4, the aggregator must transition $4 \xrightarrow{\text{Leave}} 2 \xrightarrow{\text{Leave}} 1 \xrightarrow{\text{Visit}} 3 \xrightarrow{\text{Visit}} 5$ . Note that this involves 2 Visits, as 2 is the height of state 5.

![](images/b396404f2ad55e5bd2dc93a6d00dd7f21c0f3f400c503187b26f61cf203f2989.jpg)
Figure 2: A WFSA- $\phi$ fragment in which Alg. 6 would perform a large number of updates over the $\phi$ -transitions.

If the $a$ arcs were not present in Fig. 2, however, then $1, 2, 4, \ldots, 3, 5, \ldots$ would also be a topological order, which achieves the best-case behavior of visiting each state only once. Indeed, many topological orders would be available—some more efficient than others.

# 6.2 Topological Sorting Heuristics

It is desirable to choose a good topological order when one is available. In particular, the "best-case" scenario above is achieved under this condition:

Definition 12. Let $\mathcal{A}$ be an acyclic WFSA- $\phi$ . Given a reverse topological order of the states, we say that $q$ completely precedes $q'$ if $q$ and all its failure-tree ancestors precede $q'$ and all its failure-tree ancestors. We say that the order is compatible with the failure trees of $\mathcal{A}$ if whenever $q, q'$ are in the same failure tree but have disjoint sets of ancestors, either $q$ completely precedes $q'$ or vice-versa.
To put this another way, a compatible order of the WFSA states may jump back and forth among failure trees, as needed to achieve a topological ordering, but each failure tree's states will appear in some depth-first order starting at the tree's root, which ensures that each state is Visited just once.

In some backoff architectures such as variable-order conditional random fields (Vieira et al., 2016), it is easy to find a compatible order. In these WFSAs, each failure tree is associated with a position in a fixed input sentence. Simply visit the failure trees from right to left, enumerating each one's states in depth-first order starting at the root.

For the general case, we have developed a topological sorting algorithm that will find a compatible order when one exists. We begin with Kahn's (1962) agenda-based algorithm for finding a reverse topological order. It places all states onto a very simple priority queue in which "ready" states are prioritized at the front of the queue. The next state $q$ to enumerate is obtained by popping this queue, and then the parents of $q$ (that is, its immediate predecessors in the WFSA graph) decrement their counts of unenumerated children (i.e., immediate successors). If a parent state's count reaches 0, then it becomes ready and moves to the front portion of the queue. If the algorithm ever pops a non-ready state, then it throws an exception saying that the WFSA was cyclic.

Our approach is to modify Kahn's algorithm so as to break ties. Once $q$ 's children have been enumerated, Kahn's algorithm is allowed to enumerate $q$ at any time, but our modified version prefers to wait until it would be possible to enumerate $q$ and (eventually) its failure-tree ancestors with a single Visit each. Unfortunately, this test is expensive, so using it would not actually speed up Alg. 6. We therefore omit the details here.

In practice, we recommend using a greedy version of the above algorithm.
We do wait to enumerate $q$ until $q$ can be enumerated with a single Visit, but we no longer worry about its ancestors. This greedy heuristic is still guaranteed to find a compatible order if the WFSA has the special property that there are no paths between states in the same failure tree (other than $\phi$ -paths). Variable-order CRFs do have this property. Fig. 2 does not.

Specifically, we say that a not-yet-enumerated state $q \in \mathcal{T}$ is cheap if it is a $\phi$ -parent of the current $q_{\mathcal{T}}$ (that is, $q^{\phi} = q_{\mathcal{T}}$ ), so that Alg. 6 only has to call $\mathrm{Visit}(\gamma_{\mathcal{T}}, q)$ to update $q_{\mathcal{T}} \gets q$ . Modify Kahn's algorithm to prioritize cheap ready states ahead of expensive ready states. Modify Alg. 6 to repeatedly descend at the end of the main loop until $q_{\mathcal{T}}$ has at least one unenumerated $\phi$ -parent, ensuring that there is a new cheap state in $\mathcal{T}$ . The hope is that this cheap state will become ready while it is still cheap (indeed, it may already be ready).

# 6.3 Copying Aggregators

Long Leave–Visit paths can trigger many updates to the aggregator $\gamma_{\mathcal{T}}$ . Such paths can be shortened by splitting the failure tree into multiple smaller trees, each with its own aggregator. When we Visit a state $q$ , we can choose to copy the aggregator from $q^{\phi}$ and update only the copy, leaving the old aggregator at $q^{\phi}$ . While this incurs a one-time copying cost, we can now split off the failure subtree rooted at $q$ into its own failure tree. Enumerating states in this subtree will now never require visiting $q$ 's descendants. The effect is to reduce $|\mathcal{T}_{\mathrm{max}}|$ in the analysis of §6.1. App.
E presents

- a dynamic splitting heuristic that is sensitive to the actual toposort order (§6.2)
- a static splitting algorithm that uses dynamic programming to choose the optimal set of split states to minimize a worst-case bound
- a runtime analysis of an idealized case to show how Alg. 6 uses copying to gracefully degrade into Alg. 3 as the WFSA becomes denser

# 6.4 The Ring Case

In the case of a ring, it is possible to implement a faster aggregator. The aggregator still stores $N$ summands and their total, but no partial sums. It can replace a summand in time $O(1)$ rather than $O(\log N)$ , by subtracting off the old summand from the total and adding the new one. This eliminates the $\log |\Sigma|$ factor from the runtimes in §6.1.

The resulting bound $\mathcal{O}(|E| + s|\Sigma||Q||\mathcal{T}_{\mathrm{max}}|)$ for Alg. 6 is still worse than §4.1's bound of $\mathcal{O}(|E| + |\Sigma||Q|\min (1,s|\pi_{\max}|))$ for Alg. 4. However, the former becomes better when a compatible order is known and the $|\mathcal{T}_{\mathrm{max}}|$ factor can be dropped.

It is more instructive to compare the tighter bounds of $\mathcal{O}(|E| + \sum_{q' \in Q} |\Sigma(q')| \mathrm{ancs}(q'))$ for Alg. 6 and $\mathcal{O}(|E| + \sum_{q \in Q} |\hat{\Sigma}(q)|)$ for Alg. 4. If a compatible order is known, $\mathrm{ancs}(q')$ can be dropped and the former is better. If not, then either runtime could be better. The former effectively charges each state $q$ for all of the out-symbols at all of its $\phi$ -descendants $q'$ (since $\sum_{q' \in Q} |\Sigma(q')| \operatorname{ancs}(q') = \sum_{q' \in Q} \sum_{q \preceq q'} |\Sigma(q')| = \sum_{q \in Q} \sum_{q' \succeq q} |\Sigma(q')|$ ), while the latter charges $q$ for all of the distinct out-symbols at its $\phi$ -ancestors $q'$ . The reason for the difference: Both algorithms override a descendant's $a$ -arc with an ancestor's $a$ -arc, but to find these descendant-ancestor pairs, Alg. 6 loops over $a$ -arcs at the descendant (pushing subtrahends up from below) while Alg. 4 loops over $a$ -arcs at the ancestor (pulling subtrahends up from above).
When different descendant-ancestor paths overlap, the former algorithm shares work between them if the Visit order is good, while the latter shares work between them via memoization.
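For concreteness, the constant-time ring aggregator of §6.4 can be sketched in the ordinary field of reals (our choice of ring; any ring with exact subtraction works the same way): each `set` subtracts the old summand from the stored total and adds the new one, so no partial sums are kept.

```python
class RingAggregator:
    """Ring-weighted aggregator: set() runs in O(1) via subtraction,
    dropping the O(log N) factor of the tree-based aggregator."""

    def __init__(self):
        self.vals = {}      # a ↦ current summand (default 0)
        self.total = 0.0    # invariant: total == sum of all summands
        self.log = []       # undo log of (key, previous value) pairs

    def set(self, a, v):
        old = self.vals.get(a, 0.0)
        self.log.append((a, old))
        self.total += v - old     # subtract the old summand, add the new one
        self.vals[a] = v

    def value(self):
        return self.total         # O(1): maintained incrementally

    def undo(self, n):            # revert the last n updates
        for _ in range(n):
            a, old = self.log.pop()
            self.total += old - self.vals[a]
            self.vals[a] = old
```

Replacing a summand thus costs one subtraction and one addition, independent of how many summands the aggregator holds.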
| Alg. | $\mathcal{O}$ -cost | Use case |
| --- | --- | --- |
| Alg. 1 | $(\overline{s}-s)\lvert\Sigma\rvert\lvert Q\rvert^{2}$ | never |
| Alg. 3 | $(\overline{s}-s)\lvert\Sigma\rvert\lvert Q\rvert$ | — |
| Alg. 4 | $\lvert\pi_{\max}\rvert\, s\lvert\Sigma\rvert\lvert Q\rvert$ | $s \ll (\overline{s}-s)/\lvert\pi_{\max}\rvert$ |
| Alg. $6^{+}$ | $C_{U}\, s\lvert\Sigma\rvert\lvert Q\rvert$ | $s \ll (\overline{s}-s)/C_{U}$ |
| Alg. $6^{-}$ | $\lvert\mathcal{T}_{\max}\rvert\, C_{U}\, s\lvert\Sigma\rvert\lvert Q\rvert$ | $s \ll (\overline{s}-s)/(\lvert\mathcal{T}_{\max}\rvert C_{U})$ |
| App. E | $\sqrt{C_{U}\, s}\,\lvert\Sigma\rvert\lvert Q\rvert$ | $s \ll (\overline{s}-s)^{2}/C_{U}$ |
Table 1: Runtime of computing the failure term by the different algorithms. The "use case" column indicates when an algorithm has better complexity than the baseline algorithm, Alg. 3. $C_U$ is the update complexity of the aggregator interface: $\log |\Sigma|$ in the general case (via a Fenwick tree) and 1 in the ring case of §6.4 (via subtraction). Alg. $6^+$ is the runtime for WFSA- $\phi$ 's such as VoCRFs where a compatible state order is known, whereas Alg. $6^-$ is the general worst-case runtime.

# 7 Comparison of Algorithms

This work proposed multiple algorithms for computing the pathsum of an acyclic WFSA- $\phi$ . They are all alternatives to running the backward algorithm (Alg. 1)—or its simple improvement by aggregation (Alg. 3)—after explicitly expanding failure transitions (Alg. 2). This section summarizes the improvements.

As mentioned in §3.2, we never change the way the local component of the backward values is computed. All algorithms we consider therefore retain the $\mathcal{O}(|E|)$ complexity term from expanding the non- $\phi$ transitions. What differs is the method for computing the failure term $\beta (q,\Sigma \setminus \Sigma (q))$ —the contribution of the paths starting at $q$ that take $q$ 's failure transition. Table 1 compares this term's runtime complexity for all the algorithms discussed.

To keep matters in perspective, the benefits of our more sophisticated pathsum algorithms over the basic Alg. 3 only make an actual impact if Alg. 3's failure-component complexity $\mathcal{O}\big((\overline{s} -s)|\Sigma ||Q|\big)$ dominates the local component $\mathcal{O}(|E|)$ , where $|E|\geq s|\Sigma ||Q|$ . In particular, reducing the failure component is only helpful if $\overline{s}\gg s$ , so that expanding failure transitions would make the graph denser.
# 8 Conclusion

We presented two new algorithms for more efficiently computing the backward values and pathsum of a sparse acyclic semiring-weighted FSA with $\phi$ -transitions, using the observation that a $\phi$ -transition from $q$ to $q^{\phi}$ means that $\beta(q)$ is a sparsely modified version of $\beta(q^{\phi})$ . We characterized when the new algorithms are asymptotically faster.

# Limitations

This section addresses two main limitations of our work: the assumptions made on the structure of the WFSAs and the applicability of the proposed algorithms in real scenarios.

Acyclicity assumption. We only consider acyclic WFSAs. While this covers interesting use cases such as CRFs, other commonly used instances of WFSAs also contain cycles, e.g., $n$ -gram language models. Furthermore, all our novel algorithms actually assume that $E \cup E^{\phi}$ is acyclic, whereas failure expansion only requires that the resulting $\overline{E}$ is acyclic. The former is a strictly stronger condition—see Fig. 5 below for an example WFSA- $\phi$ where $E \cup E^{\phi}$ is not acyclic, but $\overline{E}$ is.

Applicability. As seen above, the runtime of Alg. 6 depends on the size of failure trees, with complexity $\mathcal{O}(|E| + s|\Sigma||Q||\mathcal{T}_{\max}|\log |\Sigma|)$ . In practice, failure trees may be large, or $s$ may be large, which could result in our algorithms performing worse than the naive approaches. To see this, consider higher-order CRFs with backoff, a useful formalism for sequence tagging in NLP (Vieira et al., 2016), which can be encoded as WFSAs. They were the initial motivation for our proposed algorithms.
Although these backoff CRFs do admit a compatible topological order that allows us to avoid the $|\mathcal{T}_{\max}|$ factor ( $\S 6.2$ ), we inspect them as an example of how large $|\mathcal{T}_{\max}|$ can be.

An order- $n$ CRF tagging a sequence of length $\ell$ can be represented with a WFSA- $\phi$ in the form of a lattice of $\ell$ layers. The layers include tag sequences of length $\leq n$ , meaning that, given a set of tags $\Sigma$ , each layer contains states representing histories $h \in \{\epsilon\} \cup \Sigma \cup \dots \cup \Sigma^n$ . This results in $\mathcal{O}(|\Sigma|^n)$ states per layer. Backoff transitions in such models encode transitions to lower-order histories (transitioning from a history of length $k$ to one of length $k - 1$ ) whenever a transition to a history of the same order is not possible. It is easy to see that each history of order $k$ could have up to $|\Sigma|$ incoming $\phi$ -transitions, connecting it to a large failure tree, which is exponential in size w.r.t. $n$ .

# Ethics Statement

We are not aware of any specific social risks created or exacerbated by this work.

# Acknowledgements

We would like to thank Alexandra Butoi for her valuable comments.

# References

Alfred V. Aho and Margaret J. Corasick. 1975. Efficient string matching: An aid to bibliographic search. Communications of the Association for Computing Machinery, 18(6).
Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman. 1974. The Design and Analysis of Computer Algorithms. Addison-Wesley.
Cyril Allauzen, Mehryar Mohri, and Brian Roark. 2003. Generalized algorithms for constructing statistical language models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
Uri Alon, Frank Xu, Junxian He, Sudipta Sengupta, Dan Roth, and Graham Neubig. 2022. Neuro-symbolic language modeling with automaton-augmented retrieval. In Proceedings of the International Conference on Machine Learning.
Peter F.
Brown, Vincent J. Della Pietra, Peter V. deSouza, Jenifer C. Lai, and Robert L. Mercer. 1992. Class-based $n$ -gram models of natural language. Computational Linguistics, 18(4). +David Chiang and Peter Cholak. 2022. Overcoming a theoretical limitation of self-attention. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. +Peter M. Fenwick. 1994. A new data structure for cumulative frequency tables. Software: Practice and Experience, 24(3). +Michael Hahn. 2020. Theoretical limitations of self-attention in neural sequence models. Transactions of the Association for Computational Linguistics, 8. +Jan Hajic and Barbora Hladka. 1998. Tagging inflective languages: Prediction of morphological categories for a rich structured tagset. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and International Conference on Computational Linguistics. +John Hewitt, Michael Hahn, Surya Ganguli, Percy Liang, and Christopher D. Manning. 2020. RNNs can generate bounded hierarchical languages with optimal memory. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. +Chengyue Jiang, Zijian Jin, and Kewei Tu. 2021. Neuralizing regular expressions for slot filling. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. +Arthur B. Kahn. 1962. Topological sorting of large networks. Communications of the ACM, 5(11):558-562. + +John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on Machine Learning, pages 282-289. + +Daniel J. Lehmann. 1977. Algebraic structures for transitive closure. Theoretical Computer Science, 4(1). + +Chu-Cheng Lin, Hao Zhu, Matthew R. Gormley, and Jason Eisner. 2019. Neural finite-state transducers: Beyond rational relations. 
In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. + +Andrew McCallum, Dayne Freitag, and Fernando C. N. Pereira. 2000. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of the International Conference on Machine Learning, pages 591-598. + +Mehryar Mohri. 2002a. Generic $\varepsilon$ -removal and input $\varepsilon$ -normalization algorithms for weighted transducers. International Journal of Foundations of Computer Science, 13(1):129-143. + +Mehryar Mohri. 2002b. Semiring frameworks and algorithms for shortest-distance problems. Journal of Automata, Languages and Combinatorics, 7(3). + +Hao Peng, Roy Schwartz, Sam Thomson, and Noah A. Smith. 2018. Rational recurrences. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. + +Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neural context. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. + +Shruti Rijhwani, Daisy Rosenblum, Antonios Anastasopoulos, and Graham Neubig. 2021. Lexically aware semi-supervised learning for OCR post-correction. Transactions of the Association for Computational Linguistics, 9. + +Sunita Sarawagi and William W. Cohen. 2004. Semi-Markov conditional random fields for information extraction. In Advances in Neural Information Processing Systems, volume 17. + +Roy Schwartz, Sam Thomson, and Noah A. Smith. 2018. Bridging CNNs, RNNs, and weighted finite-state machines. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. + +Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In International Conference on Learning Representations. 
Tim Vieira, Ryan Cotterell, and Jason Eisner. 2016. Speed-accuracy tradeoffs in tagging with variable-order CRFs and structured sparsity. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.

![](images/6219a6ae54e8c4632c0e56025d0b33179b53f6d1da1aa399345801d396881738.jpg)
(a) A fragment inside of a WFSA- $\phi$ graph.

![](images/0a31db76898885c19a58d88c6957a506ed012f5ad0598030af7cd10c6298946b.jpg)
(b) Failure-expanded version of the fragment from Fig. 3a.
Figure 3: WFSA- $\phi$ and WFSA examples discussed in App. A.

# A Algorithm Demonstrations

Consider the WFSA- $\phi$ fragment in Fig. 3a and the version in Fig. 3b that is produced by failure expansion (Alg. 2). This section demonstrates how different algorithms we discuss operate to compute the value $\beta(q)$ .

The normal backward algorithm (Alg. 1) on the failure-expanded version would compute $\beta(q)$ as

$$
\begin{array}{l} \beta (q) \leftarrow w _ {1} \otimes \beta (q _ {1}) \oplus w _ {2 a} \otimes \beta (q _ {2}) \\ \oplus w _ {2 b} \otimes \beta (q _ {2}) \oplus w _ {3} \otimes \beta (q _ {3}). \\ \end{array}
$$

The version that memoizes out-symbol sums (Alg. 3) would compute $\beta(q)$ as

$$
\begin{array}{l} \beta (q) \leftarrow w _ {1} \otimes \beta \left(q _ {1}\right) \oplus w _ {2 a} \otimes \beta \left(q _ {2}\right) \\ \oplus w _ {2 b} \otimes \beta (q _ {2}) \oplus \beta (q ^ {\phi}, c). \\ \end{array}
$$

Alg. 3 is equivalent to copying the entire $\beta(q^{\phi}, a)$ memo table from $q^{\phi}$ , modifying the values for $a \in \Sigma(q)$ , and summing.
That is, the dictionary $\{c \mapsto w_3 \otimes \beta(q_3), b \mapsto w_4 \otimes \beta(q_4)\}$ would be passed back from $q^{\phi}$ to $q$ and updated there to $\{c \mapsto w_3 \otimes \beta(q_3), b \mapsto w_{2a} \otimes \beta(q_2) \oplus w_{2b} \otimes \beta(q_2), a \mapsto w_1 \otimes \beta(q_1)\}$ , and $\beta(q, \Sigma)$ would be found by summing the values in this dictionary. + +The subtraction-based algorithm (Alg. 4) would compute $\beta(q)$ as + +$$ +\begin{array}{l} \beta (q) \leftarrow w _ {1} \otimes \beta (q _ {1}) \oplus w _ {2 a} \otimes \beta (q _ {2}) \\ \oplus w _ {2 b} \otimes \beta (q _ {2}) \oplus \beta (q ^ {\phi}) \ominus \beta (q ^ {\phi}, \{a, b \}) \\ = w _ {1} \otimes \beta (q _ {1}) \oplus w _ {2 a} \otimes \beta (q _ {2}) \\ \oplus w _ {2 b} \otimes \beta (q _ {2}) \oplus \beta (q ^ {\phi}) \ominus \beta (q ^ {\phi}, \{b \}). \\ \end{array} +$$ + +Lastly, Alg. 6 would initialize an aggregator $\gamma$ at the failure tree root $q^{\phi}$ as $\{c\mapsto w_3\otimes \beta (q_3),b\mapsto$ $w_{4}\otimes \beta (q_{4})\}$ , and pass the aggregator back to $q$ . There, $\gamma$ would be updated via $\gamma .set(a,\ldots)$ and $\gamma .set(b,\ldots)$ to $\{c\mapsto w_3\otimes \beta (q_3),b\mapsto w_{2a}\otimes$ $\beta (q_{2})\oplus w_{2b}\otimes \beta (q_{2}),a\mapsto w_{1}\otimes \beta (q_{1})\}$ , causing $\gamma .value()$ to change. Then, $\beta (q)$ would be computed as $\beta (q) = \gamma .value()$ . Compare this to Alg. 3, which had to explicitly sum up all the values in this dictionary to compute $\beta (q)$ , since it did not use an aggregator data structure (App. C) to maintain partial sums over subsets of these values. + +# B Number of Transitions Added by Failure Expansion + +We show §3.1's claim that the number of transitions added by failure expansion (Alg. 2) is $\left( \overline{s} - s \right) |\Sigma| |Q|$ when the input WFSA- $\phi$ is deterministic. 
+
+In the deterministic case, each out-symbol at a state labels exactly one outgoing transition. Hence the number of added transitions for a given state $q$ equals the number of added out-symbols, $|\overline{\Sigma}(q) \setminus \Sigma(q)| = |\overline{\Sigma}(q)| - |\Sigma(q)|$, where $\overline{\Sigma}(q) \supseteq \Sigma(q)$. Summing over all $q \in Q$, and using Definition 9, we get a total number of added transitions of
+
+$$
+\begin{array}{l} \sum_ {q \in Q} |\bar {\Sigma} (q)| - \sum_ {q \in Q} |\Sigma (q)| = \bar {s} | \Sigma | | Q | - s | \Sigma | | Q | \\ = (\bar {s} - s) | \Sigma | | Q |. \\ \end{array}
+$$
+
+In the general case where the input WFSA- $\phi$ may be non-deterministic, each added out-symbol may label anywhere from 1 to $|Q|$ added transitions. Thus the total number of added transitions is between $(\overline{s} - s)|\Sigma||Q|$ and $(\overline{s} - s)|\Sigma||Q|^2$.
+
+# C Aggregator Implementation
+
+A Fenwick tree (Fenwick, 1994) is a data structure that stores a sequence $v_{1}, \ldots, v_{N}$ and can efficiently return any prefix sum of the form $\oplus_{n=1}^{N'} v_{n}$ for $N' \in [0, N]$, as well as allowing the individual elements $v_{n}$ to be updated. Each prefix-sum query or element update takes $\mathcal{O}(\log N)$ time.
+
+Our aggregator interface in §5 is simpler. It only queries the full sum $\oplus_{n=1}^{N} v_n$ (the case $N' = N$). Thus, the order of the elements is not considered by this interface. §6.4 noted that in the special case where subtraction is available (and numerically stable), an aggregator can be implemented even more efficiently without a Fenwick tree, since then it is easy to update the sum in constant time when updating any element. However, subtraction is not guaranteed to be available for arbitrary $\oplus$ operations (e.g., $\oplus = \max$).
+
+![](images/04a4f6c15e0d4da5c007fe00fb357cf2d21fe126789dd76d9df29e7f11070ad8.jpg)
+Figure 4: A Fenwick tree computing $1 + 2 + 3 = 3 + 3 = 6$.
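For intuition, here is a minimal Python sketch of the subtraction-based shortcut just mentioned, in the real semiring ($\oplus = +$); the class name and interface are illustrative stand-ins, not the paper's implementation.

```python
class SubtractionAggregator:
    """Illustrative aggregator for a semiring with subtraction (here: reals).

    set() and value() each take O(1) time: the running total is patched by
    subtracting the key's old value. This trick is unavailable when the
    semiring addition has no inverse (e.g., for max).
    """
    def __init__(self):
        self.vals = {}      # key -> current value (defaults to 0)
        self.total = 0.0    # maintained sum of all values

    def set(self, a, v):
        self.total += v - self.vals.get(a, 0.0)
        self.vals[a] = v

    def get(self, a):
        return self.vals.get(a, 0.0)

    def value(self):
        return self.total
```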
+ +A Fenwick tree stores the elements $v_{n}$ at the leaves of a balanced binary tree. Each internal (nonleaf) node stores the $\oplus$ -sum of the values stored at its children. As a result, thanks to the associativity of $\oplus$ , the root of the tree contains the full sum $\bigoplus_{n=1}^{N} v_{n}$ , which can be looked up in $\mathcal{O}(1)$ time. An example of a Fenwick tree (in the real semiring) is presented in Fig. 4. Note that we draw the root of a Fenwick tree at the top and consider it to be the ancestor of all other nodes, whereas failure trees had the root as the descendant of all other states. + +Initial creation of the Fenwick tree takes only $\mathcal{O}(N)$ total time by visiting all nodes in bottom-up order and setting each non-leaf node to the $\oplus$ -sum of its children. When a leaf $v_{n}$ is updated, just its ancestors are recomputed, again in bottom-up order. As there are about $\log N$ ancestors, this update takes $\mathcal{O}(\log N)$ total time. + +Our aggregator is a Fenwick tree that stores $N = |\Sigma|$ elements, where $v_{n}$ is the value associated with the $n^{\text{th}}$ element of $\Sigma$ . (That is, we identify the possible keys $a \in \Sigma$ with the integers $[1, N]$ .) Initially, $v_{n} = 0$ , but may be changed by set. Each call to set takes $\mathcal{O}(\log |\Sigma|)$ time; this factor appears in our runtime analysis. To achieve our runtime bounds for sparse WFSAs, we must take care not to spend $\mathcal{O}(|\Sigma|)$ time initializing all of the leaves and internal nodes to $\mathbf{0}$ every time we create an aggregator. Array initialization overhead can always be avoided, using a method from computer science folklore (Aho et al., 1974, exercise 2.12). + +Alternatively, we can store values in the Fenwick tree only for those keys for which values have been set. Under this design, the operation set $(a:\Sigma ,v:\mathbb{K})$ must update $v_{n}\gets v$ where $n$ is the integer index associated with key $a$ . 
To find $n$, the aggregator maintains a hash table that maps keys to consecutive integers. We assume $\mathcal{O}(1)$-time hash operations. The first key that is set is mapped to 1, the second is mapped to 2, etc. When a key $a$ is set for the first time, that is, when it is not found in the hash table, $N$ is incremented, the mapping $a \mapsto N$ is added to the hash table, and $v_{N} = v$ is appended to the Fenwick sequence. The hash table is also consulted by the get operation.
+
+In our application, for an aggregator that represents state $q$, the keys that have been set are $\overline{\Sigma}(q)$. The design in the previous paragraph therefore reduces $N$ from $|\Sigma|$ to $N = |\overline{\Sigma}(q)|$. As a result, the factor $\mathcal{O}(\log |\Sigma|)$ in our analysis could actually be reduced to $\mathcal{O}(\log \max_{q \in Q} |\overline{\Sigma}(q)|)$.
+
+Note that to obtain this runtime reduction, the undo method must properly undo the changes not only to the Fenwick tree but to the integerizing hash table (see footnote 7). If a call to set in Visit incremented $N$ and added $a \mapsto N$, then the call to undo in Leave must remove $a \mapsto N$ and decrement $N$, thereby keeping $N$ small as desired.
+
+# D Weighted $\phi$ -Transitions
+
+Throughout the main paper, we assumed that all $\phi$-transitions have a weight of 1. This simplifying assumption is typically violated by backoff models (e.g., Allauzen et al., 2003). Fortunately, it can be removed with relatively small changes to our equations, algorithms and data structures.
+
+Most simply, a weighted failure transition $q \xrightarrow{\phi / w^{\phi}} q^{\phi}$ could be simulated by a path $q \xrightarrow{\phi / 1} q^{\varepsilon} \xrightarrow{\varepsilon / w^{\phi}} q^{\phi}$ where $q^{\varepsilon}$ is a newly introduced intermediate state with only an $\varepsilon$-transition. We would then have to eliminate the $\varepsilon$-transition as mentioned in §2.
In this case, this simply means replacing $q^{\varepsilon} \xrightarrow{\varepsilon / w^{\phi}} q^{\phi}$ in $E$ with transitions $\{q^{\varepsilon} \xrightarrow{a / w^{\phi} \otimes w} q' : q^{\phi} \xrightarrow{a / w} q' \in E\}$. However, this may be expensive when the original fallback state $q^{\phi}$ has many outgoing transitions, which is typical in a backoff setting. Copying all of those transitions to a parent as in Alg. 2 (failure expansion) is exactly what the new methods in this paper are designed to avoid. We therefore give direct modifications to our constructions.
+
+Suppose the failure transition for state $q$ has weight $w^{\phi}$, that is, $E$ contains $q \xrightarrow{\phi / w^{\phi}} q^{\phi}$ where perhaps $w^{\phi} \neq 1$. Then the second case of Eq. (9) should be modified to set
+
+$$
+\beta (q, a) = w ^ {\phi} \otimes \beta (q ^ {\phi}, a)
+$$
+
+for any $a \notin \Sigma(q)$. Similarly, $w^{\phi}$ should be incorporated into Eq. (13), which becomes
+
+$$
+\beta (q, \Sigma \setminus \Sigma (q)) = w ^ {\phi} \otimes \bigoplus_ {b \in \Sigma \setminus \Sigma (q)} \beta (q ^ {\phi}, b).
+$$
+
+Finally, the subtraction expression in the right-hand side of Eq. (14) must be left-multiplied by $w^{\phi}$.
+
+In Alg. 2, which constructs the failure-expanded edge set, the update at state $q$ becomes
+
+$$
+\bar {E} \leftarrow \bar {E} \cup \left\{q \xrightarrow {a / w ^ {\phi} \otimes w} q ^ {\prime} \mid q ^ {\phi} \xrightarrow {a / w} q ^ {\prime} \in \bar {E}, a \notin \Sigma (q) \right\}
+$$
+
+Algs. 3 and 4 undergo straightforward modifications based on the modified Eqs. (9), (13) and (14). When $\beta(q^{\phi}, b)$ is copied backwards over a $\phi$-transition $q \xrightarrow{\phi / w^{\phi}} q^{\phi}$, it must be left-multiplied by $w^{\phi}$ to yield $\beta(q, b)$. This affects Alg. 3 lines 8-9 and Alg. 4 line 12, as well as the purple terms in Alg. 4 line 9. These modifications do not affect the asymptotic runtime complexity.
+
+Alg.
6 requires more modification. We must extend our aggregator class (§5) with a new method that left-multiplies all elements by a constant:[17]
+
+1: class Aggregator(): $\triangleright$ We use $\gamma$ to refer to an aggregator instance
+ $\vdots$
+6: def mult(m: $\mathbb{K}$ ) $\triangleright$ $\forall a \in \Sigma$, updates $\gamma(a) \gets m \otimes \gamma(a)$
+
+In Alg. 5, $\operatorname{Visit}(\gamma, q)$ should begin by calling $\operatorname{mult}(w^{\phi})$ where $w^{\phi}$ is the weight of the failure arc from $q$. Consequently, $\operatorname{Leave}(\gamma, q)$ should be modified to undo one more update than before.
+
+How to implement the mult method efficiently?
+
+With both subtraction and division The subtraction-based aggregator (§6.4) can be modified to still support all operations in $\mathcal{O}(1)$ time, provided that the ring $\mathbb{K}$ is actually a division ring (noncommutative field), i.e., it supports division by nonzero multipliers. The aggregator maintains an overall multiplier $M$, initially 1, and the call $\text{mult}(m)$ replaces $M$ with $m \otimes M$; thus, $M$ is the product of the $I$ multipliers applied so far, $m_I \otimes \dots \otimes m_1$. As in App. C, we identify each key with an integer index $n$. If $a$ has index $n$, then set $(a, v)$ stores $M^{-1} \otimes v$ into $v_n$. Later get $(a)$ can return $M \otimes v_n$; since $M$ may have been updated in the meantime, this yields the originally set value $v$ left-multiplied by all subsequent multipliers. The aggregator also maintains the total $\bigoplus_{n=1}^{N} v_n$ as $v_n$ values are set or replaced (using subtraction), and the value method returns $M \otimes \bigoplus_{n=1}^{N} v_n$.
+
+With subtraction only If $\mathbb{K}$ does not support division, then the subtraction-based aggregator can be rescued as follows. The aggregator maintains the number $I$ of multipliers applied so far, as well as their product $M$ as before.
The function set $(a,v)$ now stores $v$ into $v_{n}$ and $I$ into $i_n$, and later get $(a)$ returns $M_{i_n} \otimes v_n$, where in general $M_i$ is defined to be the product of multipliers subsequent to $m_i$, that is, $M_i = m_I \otimes \dots \otimes m_{i+1}$. The aggregator maintains the current total $S$ that should be returned by value; the mult $(m)$ method left-multiplies this total by $m$, while the method set $(a,v)$ modifies this total by adding $v \ominus \operatorname{get}(a)$ before it updates $(v_n, i_n)$. The difficulty is now in obtaining the partial products $M_i$ without division. This can be done by maintaining $m_1, \ldots, m_I$ in a Fenwick tree. This means that mult and get now take time $\mathcal{O}(\log I)$ rather than $\mathcal{O}(1)$. The effect on §6.4's ring-based version of Alg. 6 is to add $\sum_{q' \in Q} \operatorname{ancs}(q') \log |\pi_{\max}| \leq |Q| |\mathcal{T}_{\max}| \log |\pi_{\max}|$ to the asymptotic runtime expression. This is the same cost as if every state had $\log |\pi_{\max}|$ additional outgoing symbols. $|\pi_{\max}|$ is usually very small.
+
+Without subtraction For this case, we stored the summands in a Fenwick tree (App. C). Fortunately, it is possible to extend that data structure to support mult in time $\mathcal{O}(1)$, where $N$ is the number of elements, without affecting the asymptotic runtime of set, value, or undo. The asymptotic runtimes of Algs. 5-6 will remain unchanged.
+
+In our modified Fenwick tree, the $N$ leaves store unscaled values $u_{1},\ldots ,u_{N}\in \mathbb{K}$. Each node $j$ (leaf or internal node) stores a multiplier $m_j$ that will be lazily applied to all of the leaves that are descendants of $j$. Thus, the scaled value $v_{n}$ is found as the product $m_r \otimes m_{j_1} \otimes m_{j_2} \otimes \dots \otimes m_n \otimes u_n$, where $r, j_{1}, \ldots, n$ is the path from the root $r$ to the leaf $n$.
+
+Thus, the leaves store the elements $v_{n}$ directly (as they would in an ordinary Fenwick tree) only in the special case where all the multipliers are 1. In general $v_{n}$ must be computed on demand. The runtime of get is now $\mathcal{O}(\log N)$ rather than $\mathcal{O}(1)$, but Algs. 5-6 never actually use the get method.
+
+The new call $\mathrm{mult}(m)$ simply replaces $m_r \gets m \otimes m_r$, which affects all $v_n$ in $\mathcal{O}(1)$ total time.
+
+To support fast computation of the total value $\bigoplus_{n = 1}^{N}v_{n}$, we also store partial sums at the nodes, as before. Thus, each node $j$ stores a pair $(m_j, u_j)$. The scaled value of node $j$ is defined to be $m_j \otimes u_j$. When $j$ is an internal node, we ensure as an invariant that $u_{j}$ is the sum of the scaled values of $j$'s children, updating it whenever $j$'s children change. The value method simply returns the scaled value of the root in $\mathcal{O}(1)$ time.
+
+The interesting modification is to the set method. To set $v_{n}$ to $v$, leaf $n$ is modified to set $(m_{n}, u_{n}) \gets (1, v)$, but also, all of $n$'s ancestors $j$ must be modified to have multipliers $m_{j} = 1$, so that $v_{n} = 1 \otimes \dots \otimes 1 \otimes v = v$ as desired. Before being set to $1$, each old multiplier $m_{j}$ is "pushed down" to its children so that it still affects all leaves of $j$. The method descends from the root $r$ to leaf $n$: it pushes the $m_{j}$ values out of the way on the way down, updates the leaf at the bottom, and restores the invariant by recomputing the $u_{j}$ values on the way back up as it returns.
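As a concrete sketch in the real semiring (class and method names are invented for illustration), this lazy-multiplier scheme might look as follows; the pseudocode for set given next performs the same root-to-leaf descent.

```python
class LazyMultTree:
    """Sketch of a modified Fenwick-style tree over the reals.

    Each node stores a pair (m, u); the scaled value of node j is m*u, and
    every internal node's u equals the sum of its children's scaled values.
    mult() runs in O(1), set() in O(log N), value() in O(1).
    """
    def __init__(self, n):
        self.n = 1
        while self.n < n:
            self.n *= 2                  # pad to a complete binary tree
        self.m = [1.0] * (2 * self.n)    # lazy multipliers
        self.u = [0.0] * (2 * self.n)    # unscaled values / partial sums

    def value(self):                     # total = scaled value of the root
        return self.m[1] * self.u[1]

    def mult(self, c):                   # scale every element by c, O(1)
        self.m[1] *= c

    def set(self, i, v):                 # set element i to v
        j = self.n + i
        path = []
        while j > 1:                     # collect the leaf-to-root path
            path.append(j)
            j //= 2
        path.append(1)
        path.reverse()                   # now root ... leaf
        for j in path[:-1]:              # push multipliers down the path
            for k in (2 * j, 2 * j + 1):
                self.m[k] *= self.m[j]
            self.m[j] = 1.0
        leaf = path[-1]
        self.m[leaf], self.u[leaf] = 1.0, v
        for j in reversed(path[:-1]):    # restore the partial-sum invariant
            self.u[j] = (self.m[2 * j] * self.u[2 * j]
                         + self.m[2 * j + 1] * self.u[2 * j + 1])
```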
+
+1: def set(a: $\Sigma$ , v: $\mathbb{K}$ ):
+2: $n \gets$ leaf that stores value of key $a$
+3: set_desc(n, v, r) $\triangleright$ $r$ is the root of the Fenwick tree
+4: def set_desc(n: leaf, v: $\mathbb{K}$ , j: node):
+5: $\triangleright$ $j$ is an ancestor of $n$; $j$'s own proper ancestors have multiplier 1; so will $j$ upon return
+6: if $j$ is a leaf: $(m_j, u_j) \gets (1, v)$ $\triangleright$ since $j = n$
+7: else
+8: for $k \in \text{children}(j)$: $m_k \gets m_j \otimes m_k$
+9: $m_j \gets 1$ $\triangleright$ $m_j$ has been pushed down
+10: set_desc(n, v, child of $j$ that is anc. of $n$ )
+11: $u_{j} \gets \bigoplus_{k \in \mathsf{children}(j)} m_{k} \otimes u_{k}$ $\triangleright$ restore invariant at $j$
+
+The else clause in set_desc can be rephrased (less readably) to avoid looping twice over children $(j)$:
+
+8: $k \gets$ the child of $j$ that is an ancestor of $n$
+9: $m_{k} \gets m_{j} \otimes m_{k}$ $\triangleright$ push $m_j$ down to $k$
+10: set_desc(n, v, k)
+11: $u_{j} \gets u_{k}$ $\triangleright$ $= m_{k} \otimes u_{k}$, since now $m_{k} = 1$
+12: for $k^{\prime} \in \mathrm{siblings}(k)$: $\triangleright$ in a binary tree, there will be $\leq 1$
+13: $m_{k^{\prime}} \gets m_{j} \otimes m_{k^{\prime}}$ $\triangleright$ push $m_j$ down to $k'$
+14: $u_{j} \oplus = m_{k^{\prime}} \otimes u_{k^{\prime}}$
+15: $m_j \gets 1$ $\triangleright$ $m_j$ has been pushed down and invariant restored at $j$
+
+# E Tree Splitting Details
+
+Alg. 6 is applicable to any acyclic semiring-weighted WFSA- $\phi$. However, updating an Aggregator as it travels within a failure tree incurs an additional worst-case multiplicative runtime factor of $|\mathcal{T}_{\mathrm{max}}|$, the size of the biggest failure tree. This section outlines an improvement that lessens this impact by splitting large failure trees into multiple smaller ones.
+
+Alg. 6 destructively updates an aggregator $\gamma$ when Visiting a state $q$ from $q^{\phi}$. This takes time $\mathcal{O}(|\Sigma(q)| \log |\Sigma|)$. In contrast, Alg.
3 can be thought of as non-destructively copying $\gamma$ to $q$ from $q^{\phi}$ , which means the work can be saved and does not have to be redone if $q$ is re-Visited later. + +This inspires us to hybridize Alg. 6 as follows: $\mathrm{Visit}(\gamma, q)$ in Alg. 5 may optionally copy-and-update $\gamma$ rather than just updating it. Copying effectively cuts the transition $q \xrightarrow{\phi / w} q^{\phi}$ , making the sub-tree rooted at $q$ a new independent failure tree with its own Aggregator instance. Copy-and-update does incur a one-time cost of $\mathcal{O}(|\Sigma|)$ , but now Alg. 6 line 3 will select a smaller failure tree. + +However, at what states (if any) should we split each failure tree? The optimal set of splits depends on the topological order (§6.2) used by Alg. 6. + +Dynamic splitting heuristics A simple greedy heuristic would be to split at $q$ upon any call $\operatorname{Visit}(q)$ where copy-and-update is estimated to be cheaper than destructive updating, based on the current size of the aggregator and the number of required updates $|\Sigma(q)|$ . However, this does not consider the future benefit of having smaller failure trees, and it does not adapt to the topological order. + +A more sophisticated dynamic heuristic is for $\mathrm{Visit}(q)$ to split at $q$ if not doing so would cause the total time spent so far on all $\mathrm{Visit}(q)$ calls[22] to exceed the time that it would take to copy-and-update at $q$ . (Put another way, it does so if it now realizes in retrospect that it would have been better for the very first $\mathrm{Visit}(q)$ call to have invested in copy-and-update.) This ensures that our enhanced Alg. 6 will take at most twice as long as Alg. 3, + +which always does copy-and-update. It eventually splits any state that is Visited often enough by the chosen topological order, especially if that state is expensive to Visit. 
On the other hand, if the chosen topological order is compatible so that every state is Visited only once, it will still achieve or outperform the best-case behavior of the original Alg. 6.
+
+Static splitting algorithms We may also consider static methods, which do not adapt to the topological order that is actually used, but optimize to mitigate the worst case. In the runtime analysis of §6.1, the failure tree $\mathcal{T}$ contributes $\mathcal{O}(f(\mathcal{T})\log |\Sigma|)$ to the failure term in the worst-case runtime of Alg. 6,[23] where $f(\mathcal{T}) \stackrel{\mathrm{def}}{=} \sum_{q\in \mathcal{T}}|\Sigma (q)|\mathrm{ancs}(q)$.[24] We may seek a split that is optimal with respect to this runtime bound. Let $q_{1}$ be the root of $\mathcal{T}$. Suppose we choose to copy-and-update the aggregator when we first visit each of $q_{2},\ldots ,q_{K}\in \mathcal{T}$, essentially cutting off each state $q_{k}$ from its fallback state $q_{k}^{\phi}$. (Here all of the $q_{k}$ are to be distinct.) This splits $\mathcal{T}$ into trees $\mathcal{T}_1,\dots ,\mathcal{T}_K$, where each $\mathcal{T}_k$ is rooted at $q_{k}$. Then the contribution of these $K$ trees to the asymptotic runtime upper bound is proportional to $(K - 1)|\Sigma| + \sum_{k = 1}^{K}f(\mathcal{T}_k)\log |\Sigma|$, where the first term covers the cost of the $K - 1$ copy-and-update operations, and where the factor $\mathrm{ancs}(q)$ in the definition of $f(\mathcal{T}_k)$ considers only the ancestors of $q$ within $\mathcal{T}_k$. Our goal is to choose $K\geq 1$ and $q_{2},\ldots ,q_{K}$ to minimize this expression.
+
+We first remark that requiring $K \leq 2$ makes it easy to solve the problem in $\mathcal{O}(|\mathcal{T}|)$ time, assuming that we already know $|\Sigma(q)|$ for each $q \in \mathcal{T}$. Define $D_q = \sum_{q' > q} |\Sigma(q')|$, the total number of out-symbols at proper descendants of $q$.
The improvement $f(T) - (f(T_1) + f(T_2))$ from splitting $\mathcal{T}$ at $q_2$ is simply $D_{q_2} \mathrm{ancs}(q_2)$. Intuitively, there are $D_{q_2}$ out-symbols that can no longer be encountered when $\mathrm{Visit}^+$ is called on any of the $\mathrm{ancs}(q_2)$ states in $\mathcal{T}_2$.[25] This yields an improvement of $-|\Sigma| + D_{q_2} \mathrm{ancs}(q_2) \log |\Sigma|$ in the runtime bound. A simple recursion from the root $q_1$ is enough to find $D_q$ and $\mathrm{ancs}(q)$ at every state $q$, and thus find the state $q \neq q_1$ that achieves the best improvement in the runtime bound when chosen as $q_2$. If no choice achieves a positive improvement, then we do not split the tree and leave $K = 1$.
+
+We now present an exact algorithm for the full problem, with no bound on $K$. Roughly speaking, after we split at a state $q'$ (making it the root of its own failure tree), we will also consider splitting again at its ancestors $q$, but we do not make these decisions greedily; instead we use dynamic programming. The main observation is that if $q$ is currently in a failure tree with root $q' > q$ (where either $q' = q_1$ or we previously split at $q'$ ), then splitting at $q$ will give a further improvement of $-|\Sigma| + (D_q - D_{q'})\mathrm{ancs}(q)\log |\Sigma|$. Denote this quantity by $\Delta_{q|q'}$. We now wish to find the set of states $S = \{q_2,\dots,q_K\} \subseteq \mathcal{T}\setminus \{q_1\}$ that maximizes $\sum_{k = 2}^{K}\Delta_{q_k|q_k'}$, where $q_k'$ is the highest state in $\{q_1,\dots,q_K\}$ that is a proper descendant of $q_k$ (that is, $q_k' > q_k$ ). This sum is the total improvement obtained by splitting at all of $\{q_2,\dots,q_K\}$, since it is the total that would be obtained by splitting them successively in any reverse topological order.
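The $K \leq 2$ case can be sketched on a toy failure tree. All state names and out-symbol counts below are invented; since the $-|\Sigma|$ offset and the $\log|\Sigma|$ factor are common to every single-split candidate, the best split simply maximizes $D_{q_2}\,\mathrm{ancs}(q_2)$ (here $\mathrm{ancs}(q)$ is taken to count $q$ itself).

```python
# Toy failure tree, root at the bottom: y falls back to x, x to the root.
nsyms = {"root": 4, "x": 2, "y": 1}   # |Sigma(q)| for each state (made up)
fallback = {"y": "x", "x": "root"}    # failure arcs, pointing toward the root

states = list(nsyms)
children = {q: [] for q in states}
for q, p in fallback.items():
    children[p].append(q)

def ancs(q):
    """States whose fallback path passes through q, counting q itself."""
    return 1 + sum(ancs(c) for c in children[q])

def D(q):
    """Out-symbols at q's proper descendants, i.e., along its fallback path."""
    total = 0
    while q in fallback:
        q = fallback[q]
        total += nsyms[q]
    return total

# Best single split point q2 != root maximizes D(q2) * ancs(q2).
best = max((q for q in states if q in fallback),
           key=lambda q: D(q) * ancs(q))
```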
+
+For each state $q \in \mathcal{T}$ and each $q' > q$, define
+
+$$
+\bar {\Delta} _ {q \mid q ^ {\prime}} = \max \left(\check {\Delta} _ {q \mid q ^ {\prime}}, \hat {\Delta} _ {q \mid q ^ {\prime}}\right) \tag {15}
+$$
+
+$$
+\check {\Delta} _ {q \mid q ^ {\prime}} = \left(\sum_ {p} \bar {\Delta} _ {p \mid q}\right) + \Delta_ {q \mid q ^ {\prime}} \tag {16}
+$$
+
+$$
+\hat {\Delta} _ {q \mid q ^ {\prime}} = \left(\sum_ {p} \bar {\Delta} _ {p \mid q ^ {\prime}}\right) + 0 \tag {17}
+$$
+
+where $p$ in the summations ranges over the parents of $q$ (if any) in the failure tree. Here $\bar{\Delta}_{q|q'} \geq 0$ is the maximum total improvement that can be obtained by splitting a failure tree rooted at $q' > q$ at any set of states $\preceq q$; $\check{\Delta}_{q|q'}$ is the maximum if this set includes $q$, and $\hat{\Delta}_{q|q'}$ is the maximum if this set does not include $q$. The optimal split of $\mathcal{T}$ then has total improvement $\sum_{p} \bar{\Delta}_{p|q_1}$ where $q_1$ is the root of $\mathcal{T}$ and $p$ ranges over its parents.
+
+Tracing back through the derivation of this optimal improvement, one may determine which states were split to obtain it. This is similar to following backpointers in the Viterbi algorithm. Concretely, define
+
+$$
+\bar {S} _ {q \mid q ^ {\prime}} = \left\{ \begin{array}{l l} \hat {S} _ {q \mid q ^ {\prime}} & \text {if } \bar {\Delta} _ {q \mid q ^ {\prime}} = \hat {\Delta} _ {q \mid q ^ {\prime}} \\ \check {S} _ {q \mid q ^ {\prime}} & \text {otherwise} \end{array} \right. \tag {18}
+$$
+
+$$
+\check {S} _ {q \mid q ^ {\prime}} = \left(\bigcup_ {p} \bar {S} _ {p \mid q}\right) \cup \{q \} \tag {19}
+$$
+
+$$
+\hat {S} _ {q \mid q ^ {\prime}} = \left(\bigcup_ {p} \bar {S} _ {p \mid q ^ {\prime}}\right) \cup \varnothing \tag {20}
+$$
+
+Here $\bar{S}_{q|q'}$ is the optimal set of states
The optimal set of split points in $\mathcal{T}$ , not counting the original root $q_{1}$ , is $S = \bigcup_{p}\bar{S}_{p|q_{1}}$ where, again, $p$ ranges over the parents of $q_{1}$ . Any of the unions written here can be enumerated by (recursively) enumerating the disjoint sets that are unioned together, without any copying to materialize the sets. + +Concretely, we first work from the leaves down to the root: at each $q$ , we compute and memoize all of the $\Delta$ quantities (quantities (15)-(17) for all $q' > q$ ), after first having done so at the parents of $q$ . We then enumerate $S$ using the definitions (18)-(20), which recurse from the root back up to the leaves. Thanks to the choice at Eq. (18) based on the $\Delta$ quantities, this recursion considers $\bar{S}_{q|q'}$ only for the $q' > q$ pairs such that $q'$ is the highest proper descendant of $q$ in the optimal set $\{q_1\} \cup S$ . + +The total runtime is dominated by (15)-(17) and is proportional to the number of $q' > q$ pairs in $\mathcal{T}$ . Summed over all trees $\mathcal{T}$ , this is just the total height of all states in all failure trees, or equivalently $\sum_{q' \in Q} (\mathrm{ancs}(q') - 1)$ . This resembles the failure term in the worst-case runtime of Alg. 6, but is much faster since it eliminates all factors that depend on $|\Sigma|$ . Thus, when a compatible order is not known (§6.2), taking the time to optimally split the failure trees may be worth the investment. 
+
+Runtime analysis after static splitting To get a sense of how this improves the worst-case runtime, consider an idealized WFSA- $\phi$ where every state $q$ has the same number of out-symbols, $|\Sigma(q)| = s|\Sigma|$. Furthermore, relax the runtime bound by replacing $\mathrm{ancs}(q)$ in the definition of $f(\mathcal{T})$ by the larger value $|\mathcal{T}|$, so $f(\mathcal{T}) \stackrel{\mathrm{def}}{=} s|\Sigma||\mathcal{T}|^2$.
+
+This means when we split $\mathcal{T}$ into $\mathcal{T}_1,\ldots,\mathcal{T}_K$, our earlier runtime expression $(K - 1)|\Sigma| + \sum_{k = 1}^{K}f(\mathcal{T}_k)\log |\Sigma|$ becomes $(K - 1)|\Sigma| + \sum_{k = 1}^{K}s|\Sigma||\mathcal{T}_k|^2\log |\Sigma|$, or more simply, $|\Sigma|(K - 1 + (s\log |\Sigma|)\sum_{k = 1}^{K}|\mathcal{T}_k|^2)$. For a given $K$, this is minimized when all $K$ trees have equal size $\frac{|\mathcal{T}|}{K}$, yielding a minimum of
+
+$$
+| \Sigma | \left(K - 1 + \frac {s \log | \Sigma |}{K} | \mathcal {T} | ^ {2}\right) \tag {21}
+$$
+
+Setting the derivative with respect to $K$ to zero, we find that the optimal $K = |\mathcal{T}|\sqrt{s\log|\Sigma|}$.
+
+However, for a WFSA with sufficiently dense out-symbols, namely one with $s > \frac{1}{\log|\Sigma|}$, this asks to take $K > |\mathcal{T}|$, which is impossible. In that case the method will have to settle for $K = |\mathcal{T}|$, splitting each state into its own failure tree. This makes Alg. 6 reduce to Alg. 3.
+
+Conversely, for a WFSA with sufficiently sparse out-symbols, namely one with $s < \frac{1}{|\mathcal{T}_{\max}|^2\log|\Sigma|}$, the above formula asks to take $K < 1$ for all failure trees. That is also impossible: the method will have to settle for $K = 1$, not splitting $\mathcal{T}$ at all. This is the original version of Alg. 6.
+
+In between these two extremes, we can take $K \approx |\mathcal{T}|\sqrt{s\log|\Sigma|}$ as proposed above.
This makes the bound (21) on the contribution of failure tree $\mathcal{T}$ to the runtime become $\mathcal{O}(|\Sigma||\mathcal{T}|\sqrt{s\log|\Sigma|})$ . Note that the $\sqrt{}$ term is $< 1$ because we are not too dense, so this may beat Alg. 3. It also beats the original Alg. 6: if we did not split the tree but kept $K = 1$ , the expression would give $\mathcal{O}(|\Sigma||\mathcal{T}|^2s\log |\Sigma|)$ . In short, splitting the tree avoids the quadratic worst-case cost of Alg. 6. To put it another way, by eliminating the worst-case interaction among the $K$ trees, we have reduced from $\mathcal{O}(|\Sigma|K^2)$ to $\mathcal{O}(|\Sigma|K)$ . Recall that $K \geq 1$ since we are not too sparse, so this is again an improvement. + +# F Example of a Non-Suitable WFSA- $\phi$ + +![](images/6119f869f983d0eab164c534f005dddecf39f3503b3cf624885c1e1543d551a5.jpg) +Figure 5: Example of a WFSA- $\phi$ where $E\cup E^{\phi}$ is not acyclic, yet its failure expanded transition set $\overline{E}$ is. 
\ No newline at end of file diff --git a/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/images.zip b/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..295f2f1d8eda129d883b4a633406e10c24dd626b --- /dev/null +++ b/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:505e4753c0862fa3df836c34e359f5e8483516ae79024dda5a342718b572858a +size 330695 diff --git a/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/layout.json b/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7b7989ef3f9d510b3aaf1142d55ffaa2b838840c --- /dev/null +++ b/algorithmsforacyclicweightedfinitestateautomatawithfailurearcs/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e93d505da180d0c12a3306334b09e52f2f0a49fce1e8ae4289e18e3909bef819 +size 1492759 diff --git a/algorithmsforweightedpushdownautomata/0d5d6e07-621f-4875-b45d-934cc0ea5b89_content_list.json b/algorithmsforweightedpushdownautomata/0d5d6e07-621f-4875-b45d-934cc0ea5b89_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7d33b802aec6099ef2e3513bf0046c83c73682e5 --- /dev/null +++ b/algorithmsforweightedpushdownautomata/0d5d6e07-621f-4875-b45d-934cc0ea5b89_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f024c46c51613307fc0b3471c9362a69a733c5c439cbcf80ce5843cb0c26b3c1 +size 106930 diff --git a/algorithmsforweightedpushdownautomata/0d5d6e07-621f-4875-b45d-934cc0ea5b89_model.json b/algorithmsforweightedpushdownautomata/0d5d6e07-621f-4875-b45d-934cc0ea5b89_model.json new file mode 100644 index 0000000000000000000000000000000000000000..97aa08123df9db4b1b76129b31fe6cb1fb790fc1 --- /dev/null +++ 
b/algorithmsforweightedpushdownautomata/0d5d6e07-621f-4875-b45d-934cc0ea5b89_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64c0507cb90d9edcb865a7dc43405730ea73ef92b4c3e75c9c5e970e832ad171 +size 128221 diff --git a/algorithmsforweightedpushdownautomata/0d5d6e07-621f-4875-b45d-934cc0ea5b89_origin.pdf b/algorithmsforweightedpushdownautomata/0d5d6e07-621f-4875-b45d-934cc0ea5b89_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..85d77f121dfdbb7bbf6499b31d433f67637fcde2 --- /dev/null +++ b/algorithmsforweightedpushdownautomata/0d5d6e07-621f-4875-b45d-934cc0ea5b89_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c68c4a464d187506920465abe613a63a204012250924c718c6db10b3a0ba5d0 +size 382205 diff --git a/algorithmsforweightedpushdownautomata/full.md b/algorithmsforweightedpushdownautomata/full.md new file mode 100644 index 0000000000000000000000000000000000000000..59845908f4017d733b68683e46b41e7f056dcc6b --- /dev/null +++ b/algorithmsforweightedpushdownautomata/full.md @@ -0,0 +1,677 @@ +# Algorithms for Weighted Pushdown Automata + +Alexandra Butoi $^{1}$ Brian DuSell $^{2}$ Tim Vieira $^{3}$ Ryan Cotterell $^{1}$ David Chiang $^{2}$ + +$^{1}$ ETH Zürich $^{2}$ University of Notre Dame $^{3}$ Johns Hopkins University alexandra.butoi@inf.ethz.ch {bdusell1, dchiang}@nd.edu {tim.f.vieira, ryan.cotterell}@gmail.com + +# Abstract + +Weighted pushdown automata (WPDAs) are at the core of many natural language processing tasks, like syntax-based statistical machine translation and transition-based dependency parsing. As most existing dynamic programming algorithms are designed for context-free grammars (CFGs), algorithms for PDAs often resort to a PDA-to-CFG conversion. In this paper, we develop novel algorithms that operate directly on WPDAs. 
Our algorithms are inspired by Lang's algorithm, but use a more general definition of pushdown automaton and either reduce the space requirements by a factor of $|\Gamma|$ (the size of the stack alphabet) or reduce the runtime by a factor of more than $|Q|$ (the number of states). When run on the same class of PDAs as Lang's algorithm, our algorithm is both more space-efficient by a factor of $|\Gamma|$ and more time-efficient by a factor of $|Q| \cdot |\Gamma|$ . + +![](images/060ebe4abed21349f1a16217bd9c945b80d5b9029f68d04225bc88eb15f382af.jpg) + +https://github.com/rycolab/wpda + +# 1 Introduction + +Weighted pushdown automata (WPDAs) are widespread in natural language processing (NLP), primarily in syntactic analysis. For instance, WPDAs have found use in syntax-based statistical machine translation (Allauzen et al., 2014), and many transition-based dependency parsers (Nivre, 2004; Chen and Manning, 2014; Weiss et al., 2015; Dyer et al., 2015; Andor et al., 2016; Shi et al., 2017; Ma et al., 2018; Fernandez-Gonzalez and Gomez-Rodriguez, 2019) are special cases of WPDAs. In addition, PDAs have been used in computational psycholinguistics as models of human sentence processing (Resnik, 1992). Despite their ubiquity, there has been relatively little research on the theory of WPDAs themselves. In some ways, WPDAs are treated as second-class citizens compared to their equivalent cousins, weighted context-free grammars (WCFGs), for which a variety of dy + +![](images/1599287f60cbfe2c6d5316dfa69a2f1d3aeadfe25ebb483b24ea58dcc24491cd.jpg) +Figure 1: Roadmap of the paper. Solid lines are new results in this paper; dashed lines are old results. We are aware of two existing methods for PDA stringsums, via CFG and via Lang's algorithm; our algorithms are faster and/or more general than both. + +namic programming algorithms exist (Bar-Hillel et al., 1961; Earley, 1970; Stolcke, 1995). 
To help fill this gap, this paper offers several new and improved algorithms for computing with WPDAs. + +Figure 1 gives an overview of most of our results. We start by defining a weighted version of the extended PDAs of Aho and Ullman (1972, p. 173) and two special cases: the standard definition (Hopcroft et al., 2006), which we call top-down, and its mirror image, which we call bottom-up. Both top-down and bottom-up WPDAs have been used in NLP. Roark's (2001) generative parser is a top-down PDA as is Dyer et al.'s (2016). Most transition-based dependency parsers, both arc-standard (Nivre, 2004; Huang et al., 2009) and arc-eager (Nivre, 2003; Zhang and Clark, 2008), are bottom-up WPDAs. + +Next, we give a normal form for WPDAs analogous to Chomsky normal form, and we derive new dynamic programming algorithms to compute the weight of a string under top-down and bottom-up WPDAs in normal form. We are only aware of one + +previous recognition algorithm for PDAs, that of Lang (1974), which we generalize to the weighted case and improve in the following ways: + +- On PDAs more general than those Lang considers, our algorithm is more space-efficient by a factor of $|\Gamma|$ (the stack alphabet size); +- We can speed up our algorithm to be more time-efficient by a factor of more than $|Q|$ (the number of states), but without the space-complexity improvement; +- On the same PDAs that Lang considers, which we call simple, our sped-up algorithm is more efficient by a factor of $|\Gamma|$ in space and $|Q| \cdot |\Gamma|$ in time. + +Compared with the pipeline of standard procedures for converting a top-down PDA to a CFG, converting to Chomsky normal form, and parsing with CKY, our top-down algorithm is faster by a factor of more than $O(|Q|^3)$ . + +Finally, we present iterative algorithms for computing the total weight of all runs of a WPDA. + +# 2 Weighted Pushdown Automata + +# 2.1 Preliminaries + +Let $[i:j]$ denote the sequence of integers $(i,\ldots ,j)$ . 
If $s$ is a string, we write $|s|$ for the length of $s$ , $s_i$ for the $i^{\text{th}}$ symbol of $s$ , and $s(i:j]$ for the substring $s_{i+1}\cdots s_j$ . + +Definition 1. A monoid is a tuple $(A, \odot, I)$ , where $A$ is a set, $\odot$ is an associative binary operation, and $I \in A$ , called the identity element, satisfies $I \odot a = a \odot I = a$ for all $a \in A$ . If $a \odot b = b \odot a$ for all $a, b$ , we say that the monoid is commutative. + +Definition 2. A semiring is a tuple $\mathcal{W} = (A, \oplus, \otimes, \mathbf{0}, \mathbf{1})$ such that $(A, \oplus, \mathbf{0})$ is a commutative monoid and $(A, \otimes, \mathbf{1})$ is a monoid. Additionally, $\otimes$ distributes over $\oplus$ , that is, $a \otimes (b \oplus c) = a \otimes b \oplus a \otimes c$ and $(a \oplus b) \otimes c = a \otimes c \oplus b \otimes c$ , and $\mathbf{0}$ is absorbing with respect to $\otimes$ , that is, $\mathbf{0} \otimes a = a \otimes \mathbf{0} = \mathbf{0}$ . If $\otimes$ is commutative then we say that $\mathcal{W}$ is commutative. + +We also sometimes assume $\mathcal{W}$ is continuous; please see the survey by Droste and Kuich (2009) for a definition. + +# 2.2 Definition + +Our definition of weighted PDA is more general than usual definitions, in order to accommodate the top-down and bottom-up variants introduced in §2.3. It is roughly a weighted version of extended PDAs of Aho and Ullman (1972, p. 173) and the PDAs of Lewis and Papadimitriou (1997, p. 131). + +Definition 3. 
A weighted pushdown automaton (WPDA) over a semiring $\mathcal{W} = (A, \oplus, \otimes, \mathbf{0}, \mathbf{1})$ is a tuple $\mathcal{P} = (Q, \Sigma, \Gamma, \delta, (\iota, \gamma_{I}), (f, \gamma_{F}))$, where:

- $Q$ is a finite set of states;
- $\Sigma$ is a finite set of input symbols, called the input alphabet;
- $\Gamma$ is a finite set of stack symbols, called the stack alphabet;
- $\delta \colon Q \times \Gamma^{*} \times (\Sigma \cup \{\varepsilon\}) \times Q \times \Gamma^{*} \to A$ is called the transition weighting function;
- $(\iota, \gamma_{I})$ is called the initial configuration and $(f, \gamma_{F})$ is called the final configuration, where $\iota, f \in Q$ and $\gamma_{I}, \gamma_{F} \in \Gamma^{*}$.

Stacks are represented as strings over $\Gamma$, from bottom to top. Thus, in the stack $\gamma = X_1X_2\cdots X_n$, the symbol $X_1$ is at the bottom of the stack, while $X_n$ is at the top.

Definition 4. A configuration of a WPDA is a pair $(q, \gamma)$, where $q \in Q$ is the current state and $\gamma \in \Gamma^{*}$ is the current contents of the stack.

The initial and final configurations of a WPDA are examples of configurations; it is possible to generalize the initial and final stacks to (say) regular expressions over $\Gamma$, but the above definition suffices for our purposes.

A WPDA moves from configuration to configuration by following transitions of the form $q, \gamma_1 \xrightarrow{a/w} r, \gamma_2$, which represent a move from state $q$ to state $r$ while popping the sequence of symbols $\gamma_1 \in \Gamma^*$ from the top of the stack and pushing the sequence $\gamma_2 \in \Gamma^*$.

Definition 5. If $\delta(p, \gamma_1, a, q, \gamma_2) = w$, then we usually write $\delta(p, \gamma_1 \xrightarrow{a} q, \gamma_2) = w$, or say that $\delta$ has the transition $p, \gamma_1 \xrightarrow{a/w} q, \gamma_2$. We sometimes let $\tau$ stand for a transition, and we define $\delta(\tau) = w$.
We say that $\tau$ scans $a$, and if $a \neq \varepsilon$, we call $\tau$ scanning; otherwise, we call it non-scanning. We say that $\tau$ is $k$-pop, $l$-push if $|\gamma_1| = k$ and $|\gamma_2| = l$.

Definition 6. If $(q_{1},\gamma \gamma_{1})$ and $(q_{2},\gamma \gamma_{2})$ are configurations, and $\tau$ is a transition $q_{1},\gamma_{1}\xrightarrow{a/w} q_{2},\gamma_{2}$, we write $(q_{1},\gamma \gamma_{1})\Rightarrow_{\tau}(q_{2},\gamma \gamma_{2})$.

Definition 7. A run of a WPDA $\mathcal{P}$ is a sequence of configurations and transitions

$$
\pi = \left(q_{0}, \gamma_{0}\right), \tau_{1}, \left(q_{1}, \gamma_{1}\right), \dots, \tau_{n}, \left(q_{n}, \gamma_{n}\right)
$$

where, for $i = 1,\dots,n$, we have $(q_{i-1},\gamma_{i-1})\Rightarrow_{\tau_i}(q_{i},\gamma_{i})$. (Sometimes it will be convenient to treat $\pi$ as a sequence of only configurations or only transitions.) A run is called accepting if $(q_0, \gamma_0)$ is the initial configuration and $(q_n, \gamma_n)$ is the final configuration. If, for $i = 1, \dots, n$, $\tau_i$ scans $a_i$, then we say that $\pi$ scans the string $a_1 \cdots a_n$. We write $\Pi(\mathcal{P}, s)$ for the set of runs that scan $s$ and $\Pi(\mathcal{P})$ for the set of all accepting runs of $\mathcal{P}$.

# 2.3 Subclasses of PDAs

Next, we define two special forms for WPDAs, which we call top-down and bottom-up, because they can be used as top-down and bottom-up parsers for CFGs, respectively. The most common definition of PDA (Hopcroft et al., 2006; Autebert et al., 1997) corresponds to top-down PDAs,$^{1}$ while the machine used in an LR parser (Knuth, 1965) corresponds to bottom-up PDAs.

Definition 8. A WPDA is called bottom-up if it has only 1-push transitions. Moreover, the initial configuration is $(\iota, \varepsilon)$ and the final configuration is $(f, S)$ for some $\iota, f \in Q$ and $S \in \Gamma$.

Proposition 1.
Every WPDA is equivalent to some bottom-up WPDA.

Proof. Add states $\iota', f'$ and a stack symbol $S'$, and make $(\iota', \varepsilon)$ and $(f', S')$ the new initial and final configurations, respectively. Add transitions

$$
\iota', \varepsilon \xrightarrow{\varepsilon / 1} \iota, S'\gamma_{I}
$$

$$
f, S'\gamma_{F} \xrightarrow{\varepsilon / 1} f', S'.
$$

For each $k$-pop, $l$-push transition $p,\gamma \xrightarrow{a/w} r, X_1 \cdots X_l$ where $l > 1$, create $(l - 1)$ new states $q_1, \ldots, q_{l-1}$ and replace the transition with

$$
p, \gamma \xrightarrow{\varepsilon / 1} q_{1}, X_{1}
$$

$$
q_{i-1}, \varepsilon \xrightarrow{\varepsilon / 1} q_{i}, X_{i} \quad i = 2, \dots, l-1
$$

$$
q_{l-1}, \varepsilon \xrightarrow{a / w} r, X_{l}.
$$

For each $k$-pop, 0-push transition $q, \gamma \xrightarrow{a/w} p, \varepsilon$, replace it with the $(k+1)$-pop, 1-push transitions $q, X\gamma \xrightarrow{a/w} p, X$ for every $X \in \Gamma \cup \{S'\}$.

If the original WPDA had transitions that push at most $l$ symbols, the resulting WPDA has $O(l \cdot |\delta| \cdot |Q|)$ states and $O((l + |\Gamma|) \cdot |\delta|)$ transitions.

Definition 9. A WPDA is called top-down if it has only 1-pop transitions. Moreover, the initial configuration is $(\iota, S)$ and the final configuration is $(f, \varepsilon)$ for some $\iota, f \in Q$ and $S \in \Gamma$.

Proposition 2. Every WPDA is equivalent to some top-down WPDA.

Proof. Similar to the bottom-up case.

![](images/03209f4443fe0579ac0ee5321ebed4c875971fc0a95bbe36a902293ecdc984b0.jpg)

This conversion crucially makes use of nondeterminism to guess the top $k$ stack symbols. Aho and Ullman (1972, p. 174) give a different algorithm that uses the state to keep track of the top $k$ stack symbols. Although this does not require nondeterminism, it creates $O(|\Gamma|^k \cdot |Q|)$ states.
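To make the machinery of Defs. 3-7 concrete, here is a minimal sketch (our own illustration, not the paper's implementation; the encoding of transitions as tuples and the real-semiring weights are assumptions) of applying transitions to configurations and multiplying up a run's weight:

```python
# Hypothetical sketch of Defs. 3-7: configurations are (state, stack) pairs,
# stacks are tuples written bottom-to-top, and a transition
# (p, gamma1, a, w, q, gamma2) pops gamma1 off the top and pushes gamma2.

def step(config, transition):
    """Apply one transition (Def. 6), or return None if it does not match."""
    p, stack = config
    p_, gamma1, a, w, q, gamma2 = transition
    k = len(gamma1)
    if p != p_ or (k > 0 and stack[-k:] != gamma1):
        return None
    return (q, (stack[:-k] if k else stack) + gamma2)

# Toy WPDA over the real semiring: push 'A' for each 'a', pop one for each 'b'.
push_a = ('q', (), 'a', 0.5, 'q', ('A',))    # 0-pop, 1-push, scans 'a'
pop_b  = ('q', ('A',), 'b', 1.0, 'q', ())    # 1-pop, 0-push, scans 'b'

config, weight = ('q', ()), 1.0
for tau in [push_a, push_a, pop_b, pop_b]:   # a run scanning "aabb" (Def. 7)
    config = step(config, tau)
    weight *= tau[3]                         # run weight: product of transition weights
print(config, weight)                        # ('q', ()) 0.25
```

With the initial configuration `('q', ())` and final configuration `('q', ())`, this run is accepting, and its weight is the product of the transition weights, as in Def. 11 below.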
Finally, Lang (1974) considers a still more restricted subclass of PDAs.$^{2}$

Definition 10. A WPDA is called simple if it only has $k$-pop, $l$-push transitions for $k \leq 1$ and $l \leq 1$.

Because simple PDAs do not condition pushes on the top stack symbol, they can be weighted, but not probabilistic.

# 2.4 Stringsums and Allsums

Definition 11. The weight $\mathbf{w}(\pi)$ of a run $\pi \in \Pi(\mathcal{P})$ is the product of the weights of its transitions,

$$
\mathbf{w}(\pi) \stackrel{\mathrm{def}}{=} \bigotimes_{\tau \in \pi} \delta(\tau).
$$

Definition 12. The stringsum $\mathbf{w}(\mathcal{P}, s)$ of a string $s$ for a WPDA $\mathcal{P}$ is the total weight of all runs of $\mathcal{P}$ that scan $s$,

$$
\mathbf{w}(\mathcal{P},s) \stackrel{\mathrm{def}}{=} \bigoplus_{\pi \in \Pi(\mathcal{P},s)} \mathbf{w}(\pi).
$$

Definition 13. The allsum $\mathbf{w}(\mathcal{P})$ of a WPDA $\mathcal{P}$ is the total weight of all runs of $\mathcal{P}$,

$$
\mathbf{w}(\mathcal{P}) \stackrel{\mathrm{def}}{=} \bigoplus_{\pi \in \Pi(\mathcal{P})} \mathbf{w}(\pi).
$$

# 2.5 Push and Pop Computations

Our algorithms for bottom-up WPDAs make heavy use of push computations. Intuitively, a push computation is a run that pushes exactly one symbol without touching the stack symbols below it.

![](images/176085f28dc53051d952854dd697bf2cfee103c7c3151ebcf0cb8ec7aec4e617.jpg)
Figure 2: A push computation is a sequence of transitions that pushes exactly one symbol $(X)$ without touching the stack symbols below $(\gamma)$. The curly edges indicate sequences of transitions (which are themselves push computations) while the straight edge indicates a single transition.

Definition 14 (Push computation). Let $\mathcal{P}$ be a bottom-up WPDA and $s \in \Sigma^{*}$ an input string.
A push computation of type $[i,p,X,j,q]$ , where $0\leq i\leq j\leq |s|$ , $p,q\in Q$ , and $X\in \Gamma$ , is a run $\pi = (q_0,\gamma_0),\ldots ,(q_m,\gamma_m)$ that scans $s(i:j]$ , where $\gamma_{m} = \gamma_{0}X$ , $q_{0} = p$ , $q_{m} = q$ , and for all $l > 0$ , $|\gamma_l|\ge |\gamma_m|$ . + +Fig. 2 shows an example of a push computation. Notice that this push of $X$ might be the result of possibly many transitions that can manipulate the stack. Every symbol other than $X$ that is pushed onto the stack during this computation must be popped later by another transition. + +The mirror image of a push computation is a pop computation, used in algorithms for top-down WPDAs; we defer its definition to §5. + +# 3 Normal Form + +In this section we present a series of semantics-preserving transformations for converting an arbitrary pushdown automaton into a normal form that is analogous to Chomsky normal form for context-free grammars. This will help us obtain a fast algorithm for computing stringsums. + +Definition 15. A bottom-up WPDA is in normal form if all of its scanning transitions are $k$ -pop, 1-push for $k \leq 2$ , and all of its non-scanning transitions are 2-pop, 1-push. Similarly, a top-down WPDA is in normal form if all of its scanning transitions are 1-pop, $k$ -push for $k \leq 2$ , and all of its non-scanning transitions are 1-pop, 2-push. + +# 3.1 Binarization + +Recall that top-down and bottom-up WPDAs have 1-pop, $k$ -push transitions and $k$ -pop, 1-push transitions, respectively. Since the runtime of our string-sum algorithm depends highly on $k$ , we convert the WPDA into an equivalent one with $k \leq 2$ . We call this procedure binarization because it is entirely analogous to binarization in CFGs. It is symmetric for top-down and bottom-up WPDAs. + +Proposition 3. Every bottom-up WPDA is equivalent to a bottom-up WPDA whose transitions are $k$ -pop, 1-push where $k \leq 2$ . + +Proof. 
For each $k$-pop, 1-push transition $p, Y_1 \cdots Y_k \xrightarrow{a/w} q, X$ such that $k > 2$, we introduce $k - 2$ new states $r_1, \dots, r_{k-2}$ and we replace the original transition with the following:

$$
\begin{array}{l} p, Y_{k-1} Y_{k} \xrightarrow{a / w} r_{1}, Y_{k-1} \\ r_{i-1}, Y_{k-i} Y_{k-i+1} \xrightarrow{\varepsilon / 1} r_{i}, Y_{k-i} \quad i \in [2:k-2] \\ r_{k-2}, Y_{1} Y_{2} \xrightarrow{\varepsilon / 1} q, X. \end{array}
$$

If the original WPDA had transitions that pop at most $k$ symbols, the resulting WPDA has $O(k \cdot |\delta| \cdot |Q|)$ states and $O(k \cdot |\delta|)$ transitions.

Proposition 4. Every top-down WPDA is equivalent to a top-down WPDA whose transitions are 1-pop, $k$-push where $k \leq 2$.

# 3.2 Nullary Removal

In this section, we discuss the removal of nullary transitions from WPDAs:

Definition 16. In a bottom-up WPDA, a transition is called nullary if it is of the form $p, \varepsilon \xrightarrow{\varepsilon / w} q, X$.

Although nullary transitions are analogous to nullary productions in a CFG, the standard procedure for removing nullary productions from CFGs does not have an exact analogue for PDAs, and the procedure we describe here is novel.

We assume a bottom-up WPDA, but an identical construction exists for top-down WPDAs. We also assume that the WPDA has been binarized, and that the semiring $\mathcal{W}$ is commutative and continuous.

The construction consists of three steps: partitioning, precomputation, and removal.

Partitioning. For every symbol $X \in \Gamma$, we replace $X$ with two stack symbols $X^{\varepsilon}$ and $X^{\not\varepsilon}$. A push computation that pushes an $X^{\varepsilon}$ scans $\varepsilon$, and a
To do this, we replace every $k$ -pop transition $p, X_1 \cdots X_k \xrightarrow{a/w} q, Y$ with $2^k$ new transitions $p, X_1^{\nu_1} \cdots X_k^{\nu_k} \xrightarrow{a/w} q, Y^{\nu}$ , where $\nu = \varepsilon$ iff $\nu_i = \varepsilon$ for all $i$ and $a = \varepsilon$ . For instance, we replace transition $p, XY \xrightarrow{\varepsilon/w} q, Z$ with the following $2^2 = 4$ transitions + +$$ +p, X ^ {\varepsilon} Y ^ {\varepsilon} \xrightarrow {\varepsilon / w} q, Z ^ {\varepsilon} \qquad p, X ^ {\not \varepsilon} Y ^ {\varepsilon} \xrightarrow {\varepsilon / w} q, Z ^ {\not \varepsilon} +$$ + +$$ +p, X ^ {\varepsilon} Y ^ {\not \varepsilon} \xrightarrow {\varepsilon / w} q, Z ^ {\not \varepsilon} \qquad p, X ^ {\not \varepsilon} Y ^ {\not \varepsilon} \xrightarrow {\varepsilon / w} q, Z ^ {\not \varepsilon}. +$$ + +Precomputation. We compute the weight of all non-scanning push computations by solving the quadratic system of equations: + +$$ +\begin{array}{l} N _ {p X q} = \delta (p, \varepsilon \stackrel {\varepsilon} {\longrightarrow} q, X) \\ \oplus \bigoplus_ {Y, r} N _ {p Y r} \otimes \delta (r, Y \xrightarrow {\varepsilon} q, X) \\ \oplus \bigoplus_ {Y, Z, s} N _ {p Y Z s} \otimes \delta (s, Y Z \xrightarrow {\varepsilon} q, X) \\ N _ {p Y Z s} = \bigoplus_ {r} N _ {p Y r} \otimes N _ {r Z s}. \\ \end{array} +$$ + +See §6 for details on solving such systems of equations, which assumes that $\mathcal{W}$ is continuous. Then $N_{pXq}$ is the total weight of all push computations of type $[i,p,X,q,i]$ for any $i$ . + +Removal. First, delete every transition that pushes $X^{\varepsilon}$ for each $X \in \Gamma$ . If the PDA accepts $\varepsilon$ with weight $w$ , add $\iota, \varepsilon \xrightarrow{\varepsilon / w} f, S^{\varepsilon}$ as the sole nullary transition. (For correctness, we must also ensure that no transition pops $S^{\varepsilon}$ , no transition enters $\iota$ , and no transition leaves $f$ .) 
Sometimes an $X^{\varepsilon}$ is popped immediately after it is pushed (that is, with no input symbols scanned between the push and the pop). To handle these cases, for the following transitions, we create new versions in which popped $X^{\varepsilon}$ symbols are removed, and their corresponding weight multiplied in. For each:

$$
\begin{array}{l} p, Y^{\varepsilon} \xrightarrow{a / w} q, X^{\not\varepsilon} \\ p, Y^{\varepsilon} Z^{\varepsilon} \xrightarrow{a / w} q, X^{\not\varepsilon} \\ p, Y^{\not\varepsilon} Z^{\varepsilon} \xrightarrow{a / w} q, X^{\not\varepsilon} \end{array}
$$

(Note that $a \in \Sigma \cup \{\varepsilon\}$, but the partitioning step only allows $a = \varepsilon$ for the third type above.)

However, we have missed one type of transition, those of the form $p, Y^{\varepsilon}Z^{\not\varepsilon} \xrightarrow{a/w} q, X^{\not\varepsilon}$. Create new stack symbols ${}_{rs}Z$ for all $r, s \in Q$ and $Z \in \Gamma$. This stands for a sequence of zero or more non-scanning push computations that goes from state $r$ to $s$, followed by a push computation that pushes $Z$. The transition that pushes $Z$ must be a 0-pop transition, because all other transitions expect a symbol of the form $X^{\not\varepsilon}$ on the top of the stack.
So we modify (again) the 0-pop transitions to first simulate zero or more nullary transitions: + +For each: Replace with $(\forall s\in Q)$ + +$$ +\begin{array}{l} t, \varepsilon \xrightarrow {a / N _ {t Y P} \otimes w} q, X ^ {\notin} \quad s, \varepsilon \xrightarrow {a / N _ {t Y P} \otimes w} q, _ {s t} X \\ t, \varepsilon \xrightarrow {a / N _ {t Y Z P} \otimes w} q, X ^ {\not \in} s, \varepsilon \xrightarrow {a / N _ {t Y Z P} \otimes w} q, _ {s t} X \\ \end{array} +$$ + +And for each transition of the form $p, Y^{\varepsilon}Z^{\phi} \xrightarrow{a/w} q, X^{\phi}$ (where $a \in \Sigma \cup \{\varepsilon\}$ ), we create transitions for all $r, s, t \in Q$ : + +$$ +p _ {, r t} Z \xrightarrow {a / N _ {s Y t} \otimes w} q _ {, r s} X. +$$ + +(This step is where commutativity is needed.) Finally, add transitions to remove the state annotations, for all $p,X,q$ : + +$$ +q, _ {p p} X \xrightarrow {\varepsilon / 1} q, X ^ {\notin}. +$$ + +# 3.3 Unary Removal + +The final step in conversion to normal form is removal of unary transitions, so called by analogy with unary productions in a CFG. + +Definition 17. A transition is called unary if it is of the form $p, Y \xrightarrow{\varepsilon / w} q, X$ . + +We assume that $\mathcal{W}$ is equipped with a star operation satisfying $a^* = \mathbf{1} \oplus a \otimes a^* = \mathbf{1} \oplus a^* \otimes a$ . If $\mathcal{W}$ is continuous, then $a^* = \bigoplus_{i=0}^{\infty} a^i$ . + +Unary transitions can form cycles that can be traversed an unbounded number of times, which is problematic for a dynamic programming algorithm. Therefore, we precompute the weights of all runs of unary transitions. Define the matrix $U \in \mathcal{W}^{(Q \times \Gamma) \times (Q \times \Gamma)}$ : + +$$ +U _ {p Y, q X} = \delta (p, Y \xrightarrow {\varepsilon} q, X) +$$ + +and form its transitive closure $U^{*}$ (Lehmann, 1977). Then $U_{pY,qX}^{*}$ is the total weight of all runs of unary transitions from configuration $(p,Y)$ to $(q,X)$ . 
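In the real semiring, where $a^* = 1/(1-a)$ for $|a| < 1$, the closure $U^*$ can be computed with Lehmann's (1977) algorithm. The sketch below is our own illustration, with the $(q, X)$ index pairs flattened into plain matrix indices:

```python
# Lehmann's (1977) algorithm over the real semiring: computes the closure
# U* = I + U + UU + ... of a matrix U, assuming star(a) = 1/(1-a) converges.
# Rows/columns index (state, stack symbol) pairs, flattened to 0..n-1.

def star(a):
    return 1.0 / (1.0 - a)          # real-semiring star, valid for |a| < 1

def closure(U):
    n = len(U)
    A = [row[:] for row in U]
    for k in range(n):
        sk = star(A[k][k])
        # route paths through pivot k, reading only the old matrix A
        A = [[A[i][j] + A[i][k] * sk * A[k][j] for j in range(n)]
             for i in range(n)]
    for i in range(n):
        A[i][i] += 1.0              # add the empty run (the identity I)
    return A

# Example: two unary "transitions" with U = [[0.5, 0.5], [0, 0.5]];
# the closure is (I - U)^(-1) = [[2, 2], [0, 2]].
print(closure([[0.5, 0.5], [0.0, 0.5]]))  # [[2.0, 2.0], [0.0, 2.0]]
```

The same code works for any semiring with a star operation by swapping `+`, `*`, `star`, and the identity; rebuilding the matrix at each pivot (rather than updating in place) keeps the update reading only old values.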
Then we remove all unary transitions and modify every non-unary transition as follows: each non-unary transition $p, \gamma \xrightarrow{a/w} q, X$ is replaced with the transitions

$$
p, \gamma \xrightarrow{a / w \otimes U_{qX,rY}^{*}} r, Y
$$

for all $r, Y$. We give details on the complexity of this transformation in App. A.

Item form

$$
[i, p, X, j, q] \quad \begin{array}{l} 0 \leq i \leq j \leq n \\ p, q \in Q;\ X \in \Gamma \end{array}
$$

Inference rules

$$
\frac{[i, p, Y, k, r] \quad [k, r, Z, j - |a|, s]}{[i, p, X, j, q]} \quad \begin{array}{l} s, YZ \xrightarrow{a/w} q, X \\ s(j - |a| : j] = a \end{array}
$$

$$
\frac{[i, p, Y, j - 1, r]}{[i, p, X, j, q]} \quad r, Y \xrightarrow{s_j / w} q, X
$$

$$
\frac{}{[i, p, X, j, q]} \quad \begin{array}{l} p, \varepsilon \xrightarrow{s_j / w} q, X \\ j = i + 1 \end{array}
$$

Goal

$$
[0, \iota, S, n, f]
$$

Figure 3: Deductive system for stringsums of bottom-up WPDAs in normal form.

# 4 Stringsums in Bottom-up WPDAs

In this section, we give dynamic programming algorithms for computing the stringsum of an input string $s$ (with $|s| = n$) of bottom-up WPDAs in normal form. We give a basic version of the algorithm, which has the same runtime as Lang's algorithm but improved space requirements, and a fast version that has the same space complexity and runs asymptotically faster. On simple PDAs (for which Lang's algorithm was designed), the latter version has both improved space and time complexity.

# 4.1 Basic Algorithm

The algorithm computes stringsums efficiently by exploiting the structural similarities among the WPDA runs. Fig. 3 shows a deductive system (Shieber et al., 1995; Goodman, 1999) for deriving items corresponding to push computations.

The items have the form $[i, p, X, j, q]$ for $p, q \in Q$, $X \in \Gamma$, $0 \leq i \leq j \leq n$.
If our algorithm derives this item with weight $w$, then the push computations of type $[i, p, X, j, q]$ have total weight $w$.

We distinguish three categories of push computations, based on their final transition, and we include an inference rule for each. First are those consisting of a single 0-pop, 1-push transition. The other two categories are those ending in a 1-pop transition and a 2-pop transition, respectively. These can be built recursively from shorter push computations.

The goal item is $[0, \iota, S, n, f]$, which stands for all runs from the initial configuration to the final configuration that scan $s$.

Alg. 1 shows how to compute item weights according to these rules. At termination, the weight of the goal item is the sum of the weights of all accepting runs that scan $s$.

Algorithm 1 Compute the weights of all push computations of a bottom-up WPDA on an input string.

1. $\mathbf{w} \gets \mathbf{0}$
2. $n \gets |s|$
3. for $i \in [0:n-1]$ do
4. $j \gets i + 1$
5. $\triangleright$ 0-pop, 1-push
6. for $(p, \varepsilon \xrightarrow{s_j / w} q, X) \in \delta$ do
7. $\mathbf{w}[i,p,X,j,q] \gets w$
8. for $\ell \in [2:n]$ do
9. for $i \in [0:n-\ell]$ do
10. $j \gets i + \ell$
11. $\triangleright$ 1-pop, 1-push
12. for $p \in Q$ do
13. for $(r, Y \xrightarrow{s_j / w} q, X) \in \delta$ do
14. $\mathbf{w}[i,p,X,j,q] \oplus= \mathbf{w}[i,p,Y,j-1,r] \otimes w$
15. $\triangleright$ 2-pop, 1-push
16. for $p, r \in Q$ do
17. for $(s, YZ \xrightarrow{a/w} q, X) \in \delta$ with $s(j-|a|:j] = a$ do
18. for $k \in [i+1:j-|a|-1]$ do
19. $\mathbf{w}[i,p,X,j,q] \oplus= \mathbf{w}[i,p,Y,k,r] \otimes \mathbf{w}[k,r,Z,j-|a|,s] \otimes w$
20. return $\mathbf{w}[0,\iota,S,n,f]$

# 4.2 Correctness

Theorem 1. Let $\mathcal{P}$ be a WPDA and $s \in \Sigma^{*}$ an input string. The weight $\mathbf{w}[i,p,X,j,q]$ is the total weight of all push computations of $\mathcal{P}$ of type $[i,p,X,j,q]$.

Proof.
By induction on the span length, $\ell = j - i$ . + +Base Case. Assume that $j - i = 1$ . The only push computations from state $p$ to $q$ that push $X$ and scan $s(i:j]$ are ones that have the single transition $\tau = p, \varepsilon \xrightarrow{s_j / w} q, X$ . There cannot exist others, because normal form requires that any additional non-scanning transitions would decrease the stack height. So the total weight of all such push computations is $w$ , and the algorithm correctly sets $\mathbf{w}[i,p,X,j,q] = w$ at line 7. + +Inductive Step. Assume that the statement holds for any spans of length at most $(\ell - 1)$ and consider a span of length $\ell$ . For such spans, the algorithm computes the total weight of all push computations $\pi$ of type $[i, p, X, j, q]$ , for all $X \in \Gamma$ , $p, q \in Q$ , and $j - i = \ell$ . This weight must be the sum of weights of three types of push computations: those that end with 0-pop transitions, with 1-pop transitions, and with 2-pop transitions. + +But ending with a 0-pop transition is impossible, because such push computations must have only + +one transition and therefore $j - i \leq 1$ . The 1-pop and 2-pop parts of the sum are computed at lines 12-14 and 16-19 of the algorithm, respectively. + +Ending with 1-pop transition. The algorithm sums over all possible ending transitions $\tau_{\mathrm{end}} = r, Y \xrightarrow{s_j / w} q, X$ . (Normal form requires that this transition be scanning.) Let $\Pi$ be the set of all push computations of type $[i, p, X, j, q]$ ending in $\tau_{\mathrm{end}}$ , and let $\Pi'$ be the set of all push computations of type $[i, p, Y, j - 1, r]$ . Every push computation in $\Pi$ must be of the form $\pi = \pi' \circ \tau_{\mathrm{end}}$ , where $\pi' \in \Pi'$ , and conversely, for every $\pi' \in \Pi'$ , we have $\pi' \circ \tau_{\mathrm{end}} \in \Pi$ . By the induction hypothesis, the total weight of $\Pi'$ was computed in a previous iteration. 
Then, by distributivity, we have: + +$$ +\begin{array}{l} \bigoplus_ {\pi \in \Pi} \bigotimes_ {\tau \in \pi} \delta (\tau) = \bigoplus_ {\pi^ {\prime} \in \Pi^ {\prime}} \bigotimes_ {\tau \in \pi^ {\prime}} \delta (\tau) \otimes \delta (\tau_ {\text {e n d}}) \\ = \left(\bigoplus_ {\pi^ {\prime} \in \Pi^ {\prime}} \bigotimes_ {\tau \in \pi^ {\prime}} \delta (\tau)\right) \otimes \delta (\tau_ {\text {e n d}}) \\ = \mathbf {w} [ i, p, Y, j - 1, r ] \otimes \delta (\tau_ {\text {e n d}}). \\ \end{array} +$$ + +Ending with 2-pop transition. The algorithm sums over all possible ending transitions $\tau_{\mathrm{end}} = s, YZ \xrightarrow{a/w} q, X, a \in \{s_j, \varepsilon\}$ . Every push computation $\pi$ that ends with $\tau_{\mathrm{end}}$ decomposes uniquely into $\pi' \circ \pi'' \circ \tau_{\mathrm{end}}$ , where $\pi'$ and $\pi''$ are push computations of type $[i, p, Y, k, r]$ and $[k, r, Z, j - |a|, s]$ , respectively, for some $k \in [i + 1:j - |a| - 1]$ and $r \in Q$ . We call $(k, r)$ the split point of $\pi$ . + +The algorithm sums over all split points $(k,r)$ . Let $\Pi$ be the set of all push computations of type $[i,p,X,j,q]$ ending in $\tau_{\mathrm{end}}$ with split point $(k,r)$ , and let $\Pi'$ and $\Pi''$ be the sets of all push computations of type $[i,p,Y,k,r]$ and $[k,r,Z,j - |a|,s]$ , respectively. Every $\pi \in \Pi$ must be of the form $\pi' \circ \pi'' \circ \tau_{\mathrm{end}}$ , where $\pi' \in \Pi'$ and $\pi'' \in \Pi''$ , and conversely, for every $\pi' \in \Pi'$ and $\pi'' \in \Pi''$ , $\pi' \circ \pi'' \circ \tau_{\mathrm{end}} \in \Pi$ . Because $i < k$ , we must have $j - |a| - k \leq j - k < j - i$ , and because $k < j - |a|$ , we must have $k - i < j - |a| - i \leq j - i$ . By the induction hypothesis, the total weight of $\Pi'$ and $\Pi''$ were fully computed in a previous iteration. 
As in the previous case, by distributivity we have

$$
\bigoplus_{\pi \in \Pi} \bigotimes_{\tau \in \pi} \delta(\tau) = \mathbf{w}[i, p, Y, k, r] \otimes \mathbf{w}[k, r, Z, j - |a|, s] \otimes \delta(\tau_{\mathrm{end}}).
$$

# 4.3 Stack Automaton

The distribution over possible configurations that $\mathcal{P}$ can be in after reading $s(0:m]$ can be generated by a weighted finite-state automaton $M$. The states of $M$ are of the form $(i,q)$, with start state $(0,\iota)$ and accept states $(m,q)$ for all $q \in Q$. There is a transition $(i,q) \xrightarrow{X/w} (j,r)$ for every item $[i,q,X,j,r]$ with weight $w$. Then if an accepting run of $M$ scans $\gamma$ and ends in state $(m,q)$ with weight $w$, then $\mathcal{P}$ can be in configuration $(q,\gamma)$ with weight $w$.

Item form

$$
[i, p, XY, j, q] \quad \begin{array}{l} 0 \leq i \leq j \leq n \\ p, q \in Q;\ X, Y \in \Gamma \end{array}
$$

Inference rules

$$
\frac{}{[0, \iota, \mathbb{S}\mathbb{S}, 0, \iota]}
$$

$$
\frac{[i, p, XY, j - |a|, r]}{[i, p, XY, j, q]} \quad \begin{array}{l} r, \varepsilon \xrightarrow{a/w} q, \varepsilon \\ s(j - |a| : j] = a \end{array}
$$

$$
\frac{[k, r, ZX, i, p]}{[i, p, XY, j, q]} \quad \begin{array}{l} p, \varepsilon \xrightarrow{a/w} q, Y \\ s(i : j] = a \end{array}
$$

$$
\frac{[i, p, XY, k, r] \quad [k, r, YZ, j - |a|, s]}{[i, p, XY, j, q]} \quad \begin{array}{l} s, Z \xrightarrow{a/w} q, \varepsilon \\ s(j - |a| : j] = a \end{array}
$$

$$
\frac{[i, p, XZ, j - |a|, r]}{[i, p, XY, j, q]} \quad \begin{array}{l} r, Z \xrightarrow{a/w} q, Y \\ s(j - |a| : j] = a \end{array}
$$

Goal

$$
[0, \iota, \mathbb{S}\mathbb{S}, n, f]
$$

Figure 4: Deductive system for Lang's algorithm.
+ +# 4.4 Complexity Analysis and Speedup + +For comparison with our algorithm, we show the original algorithm of Lang (1974) in Fig. 4. It has items of the form $[i,q,XY,j,r]$ , which stands for push computations that start with $X$ as the top stack symbol and push a $Y$ on top of it. + +Our algorithm stores a weight for each item $[i,p,X,j,q]$ , giving a space complexity of $O\left(n^{2}|\mathcal{Q}|^{2}|\Gamma |\right)$ . This is more efficient than Lang's algorithm, which requires $O\left(n^{2}|\mathcal{Q}|^{2}|\Gamma |^{2}\right)$ space. + +Computing the weight of each new item requires, in the worst case (the inference rule for 2-pop transitions), iterating over stack symbols $Y, Z \in \Gamma$ , indices $j \in [0:n]$ and states $q, r \in Q$ , resulting in a runtime of $O(n|Q|^2|\Gamma|^2)$ per item. So the algorithm has a runtime of $O(n^3|Q|^4|\Gamma|^3)$ , the same as Lang's algorithm. + +This runtime can be improved by splitting the + +inference rule for 2-pop transitions into two rules:3 + +$$ +\begin{array}{l} \begin{array}{c c} \frac {[ k , r , Z , j - | a | , s ]}{\langle k , r , Y \backslash X , j , q \rangle} & s, Y Z \xrightarrow {a / w} q, X \\ & s (j - | a |: j ] = a \end{array} \\ \begin{array}{c} \hline [ i, p, Y, k, r ] \quad \langle k, r, Y \backslash X, j, q \rangle \\ \hline [ i, p, X, j, q ] \end{array} \\ \end{array} +$$ + +The first rule has $O(n^{2}|Q|^{3}|\Gamma |^{3})$ instantiations and the second rule has $O(n^{3}|Q|^{3}|\Gamma |^{2})$ . So, although we have lost the space-efficiency gain, the total time complexity is now in $O((n^3 |\Gamma |^2 + n^2 |\Gamma |^3)|Q|^3)$ , a speedup of a factor of more than $|Q|$ . We show in App. B an alternative deductive system that achieves a similar speedup. + +Furthermore, Lang's algorithm only works on simple PDAs. To make the algorithms directly comparable, we can assume in the 2-pop, 1-push case that $X = Y$ . This reduces the space complexity by a factor of $|\Gamma|$ again. 
Moreover, it reduces the number of instantiations of the inference rules above to $O(n^{2}|Q|^{3}|\Gamma|^{2})$ and $O(n^{3}|Q|^{3}|\Gamma|)$ , respectively. So the total time complexity is in $O(n^{3}|Q|^{3}|\Gamma|^{2})$ , which is a speedup over Lang's algorithm by a factor of $|Q| \cdot |\Gamma|$ . + +# 5 Stringsums in Top-down WPDAs + +Stringsums of weighted top-down WPDAs can be computed by the left/right mirror image of our bottom-up algorithm. Instead of finding push computations, this algorithm finds pop computations, which decrease (rather than increase) the stack size by exactly one. + +Definition 18 (Pop computation). Let $\mathcal{P}$ be a top-down WPDA and $s\in \Sigma^{*}$ an input string. A pop computation of type $[i,p,X,j,q]$ , where $0\leq i\leq j\leq |s|$ , $p,q\in Q$ , and $X\in \Gamma$ , is a run $\pi = (q_0,\gamma_0),\ldots ,(q_m,\gamma_m)$ that scans $s(i:j]$ , where $q_{0} = p$ , $q_{m} = q$ , $\gamma_0 = \gamma_mX$ , and for all $l < m$ , $|\gamma_l|\ge |\gamma_0|$ . + +Fig. 5 shows the inference rules used by the dynamic program, which are the mirror image of the rules in Fig. 3. Each item $[i,p,X,j,q]$ , which stands for a set of pop computations, is derived using a transition and items corresponding to pop computations that happen later in the run. 
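Both directions admit the same chart-style implementation. Below is our own real-semiring sketch of the bottom-up version (Alg. 1, equivalently the deductive system of Fig. 3); the tuple encoding of transitions is an assumption, and the top-down case is obtained by mirroring the indices as described above.

```python
from collections import defaultdict

def bottomup_stringsum(s, delta, iota, S, f):
    """Real-semiring sketch of Alg. 1 for a bottom-up WPDA in normal form.
    delta: list of (p, gamma1, a, w, q, X) with gamma1 a tuple (|gamma1| <= 2),
    a the scanned string ('' if non-scanning), X the single pushed symbol."""
    n = len(s)
    states = ({p for (p, _, _, _, _, _) in delta}
              | {q for (_, _, _, _, q, _) in delta})
    w = defaultdict(float)                       # w[i, p, X, j, q]
    for i in range(n):                           # 0-pop, 1-push (scanning)
        for (p, g1, a, wt, q, X) in delta:
            if g1 == () and a == s[i]:
                w[i, p, X, i + 1, q] += wt
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for (r, g1, a, wt, q, X) in delta:   # 1-pop, 1-push (scanning)
                if len(g1) == 1 and a == s[j - 1]:
                    for p in states:
                        w[i, p, X, j, q] += w[i, p, g1[0], j - 1, r] * wt
            for (t, g1, a, wt, q, X) in delta:   # 2-pop, 1-push
                if len(g1) == 2 and s[j - len(a):j] == a:
                    Y, Z = g1
                    for p in states:
                        for r in states:
                            for k in range(i + 1, j - len(a)):
                                w[i, p, X, j, q] += (w[i, p, Y, k, r]
                                                     * w[k, r, Z, j - len(a), t] * wt)
    return w[0, iota, S, n, f]

# Toy bottom-up WPDA in normal form accepting "aa": scan each 'a' with weight
# 0.5, then a non-scanning 2-pop transition combines the two pushed A's into S.
delta = [('q0', (), 'a', 0.5, 'q0', 'A'),
         ('q0', ('A', 'A'), '', 1.0, 'qf', 'S')]
print(bottomup_stringsum('aa', delta, 'q0', 'S', 'qf'))  # 0.25
```

The triple loop over $p$, $r$, and $k$ in the 2-pop case is exactly what the rule-splitting speedup of §4.4 factors apart.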
+ +Item form + +$$ +[ i, p, X, j, q ] \qquad \begin{array}{l} 0 \leq i < j \leq n \\ p, q \in Q; \; X \in \Gamma \end{array} +$$ + +Inference rules + +$$ +\frac {[ i + | a | , r , Y , k , s ] \quad [ k, s, Z, j, q ]}{[ i, p, X, j, q ]} \quad \begin{array}{l} p, X \xrightarrow {a / w} r, Y Z \\ s (i: i + | a | ] = a \end{array} +$$ + +$$ +\frac {[ i + 1 , r , Y , j , q ]}{[ i , p , X , j , q ]} \quad p, X \xrightarrow {s _ {i + 1} / w} r, Y +$$ + +$$ +\overline {{[ i , p , X , j , q ]}} \quad \begin{array}{l} p, X \xrightarrow {s _ {i + 1} / w} q, \varepsilon \\ i = j - 1 \end{array} +$$ + +Goal + +$$ +[ 0, \iota , S, n, f ] +$$ + +Figure 5: Deductive system for stringsums of top-down WPDAs in normal form. + +# 5.1 Comparison with Lang's algorithm + +Since top-down PDAs are more standard, and the only direct PDA stringsum algorithm in the literature is Lang's algorithm, it might have seemed natural to extend Lang's algorithm to top-down PDAs, as is done by DuSell and Chiang (2020). Like Lang's algorithm, their algorithm has items of the form $[i,q,XY,j,r]$ , but unlike Lang's algorithm, it requires the $X$ in order to support 1-pop, 2-push transitions. As a result, their algorithm has space complexity $O(n^{2}|Q|^{2}|\Gamma |^{2})$ and time complexity $O(n^{3}|Q|^{4}|\Gamma |^{3})$ , like Lang's algorithm. But if they had used our algorithm for top-down WPDAs, using pop computations, they would have saved a factor of $|\Gamma|$ space, and because their 1-pop, 2-push transitions never change the popped symbol, they would have also saved a factor of $|Q|\cdot |\Gamma|$ time. + +# 5.2 Experiment + +To give a concrete example, we consider the renormalizing nondeterministic stack RNN (RNS-RNN) of DuSell and Chiang (2022), which uses Lang's algorithm (Fig. 4) on a top-down PDA.
Since the RNN must process the string from left to right, we cannot use the bottom-up stringsum algorithm, but we can still apply the speedup of §4.4, splitting the 1-pop, 0-push rule of Fig. 4 into two rules. Again, this decreases the time complexity from $O(n^{3}|Q|^{4}|\Gamma|^{3})$ to $O((n^{3}|\Gamma|^{2} + n^{2}|\Gamma|^{3})|Q|^{3})$ . Comparing the two implementations on a corpus of strings whose lengths were drawn from [40, 80], on an NVIDIA GeForce RTX 2080 Ti GPU with $|Q| = 5$ and $|\Gamma| = 3$ , the new version is about 10 times as fast (Fig. 6). + +![](images/83ad5f19dd03d772c2b392924fac037fc41f690bb68f731840ecda7884d7a8bd.jpg) + +![](images/3985e53b2ab7e76015d235fe6c3891823b4ff1a9ea08c71d69bda522c1644fcc.jpg) +Figure 6: Applying our speedup to the RNS-RNN, which uses Lang's algorithm adapted to top-down PDAs, yields dramatic time and space savings. + +# 5.3 Comparison with CFG/CKY + +We also compare our stringsum algorithm with converting a top-down PDA to a CFG and computing stringsums using CKY. The usual conversion from top-down normal form PDAs to CFGs (Hopcroft et al., 2006) creates a CFG with $O(|Q|^2 |\Gamma|)$ nonterminal symbols, so CKY would take $O(n^3 |Q|^6 |\Gamma|^3)$ time. Our algorithm thus represents a speedup of more than $|Q|^3$ . Of course, various optimizations could be made to improve this time, and in particular there is an optimization (Eisner and Blatz, 2007) analogous to the speedup in §4.4. + +# 6 Allsums in Bottom-up WPDAs + +We can use a notion of push computation similar to Def. 14 to derive a space-efficient algorithm for computing allsums in bottom-up WPDAs. The item $[p, X, q]$ stands for runs from state $p$ to state $q$ that push the symbol $X$ on top of the stack while leaving the symbols underneath unchanged. + +Definition 19 (Push computation). Let $\mathcal{P}$ be a bottom-up WPDA.
A push computation of type $[p, X, q]$ , where $p, q \in Q$ and $X \in \Gamma$ , is a run $\pi = (q_0, \gamma_0), \ldots, (q_n, \gamma_n)$ , where $q_0 = p$ , $q_n = q$ , $\gamma_n = \gamma_0 X$ , and for all $i > 0$ , $|\gamma_i| \geq |\gamma_n|$ . + +These items closely resemble those used for computing stringsums, but discard the two variables $i$ , $j$ that we used for indexing substrings of the input, as we are interested in computing the total weight of runs that scan any string. + +Definition 20. Let $\Pi (p,X,q)$ be the set containing all push computations from state $p$ to state $q$ that push $X$ . The allsum $\mathbf{w}[p,X,q]$ is defined as + +$$ +\mathbf{w}[p,X,q] = \bigoplus_{\boldsymbol {\pi}\in \Pi (p,X,q)}\mathbf{w}(\boldsymbol {\pi}). +$$ + +The allsum of a set of push computations can be expressed in terms of other allsums: + +$$ +\begin{aligned} \mathbf{w}[p,X,q] = {} & \bigoplus_{a \in \Sigma \cup \{\varepsilon\}} \delta (p, \varepsilon \xrightarrow {a} q, X) \\ & \oplus \bigoplus_{\substack{Y \in \Gamma \\ r \in Q \\ a \in \Sigma \cup \{\varepsilon\}}} \mathbf{w}[p,Y,r] \otimes \delta (r, Y \xrightarrow {a} q, X) \\ & \oplus \bigoplus_{\substack{Y,Z \in \Gamma \\ r,s \in Q \\ a \in \Sigma \cup \{\varepsilon\}}} \mathbf{w}[p,Y,r] \otimes \mathbf{w}[r,Z,s] \otimes \delta (s, YZ \xrightarrow {a} q, X) \end{aligned} +$$ + +In general, allsums cannot be computed recursively, as $\mathbf{w}[p,X,q]$ may rely on allsums that are yet to be computed. Instead, we assume that $\mathcal{W}$ is continuous and solve the system of nonlinear equations using fixed-point iteration or the semiring generalization of Newton's method (Esparza et al., 2007). + +This algorithm computes $O(|Q|^2 |\Gamma|)$ items. An allsum algorithm based on Lang's algorithm would have computed $O(|Q|^2 |\Gamma|^2)$ items; thus we have reduced the space complexity by a factor of $|\Gamma|$ .
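As a small illustration of the fixed-point approach, consider a hypothetical one-state bottom-up WPDA over the real semiring with a 0-pop, 1-push transition of weight 0.5 and a 2-pop, 1-push transition of weight 0.25 (both scanning $\varepsilon$). With one state and one stack symbol, the allsum equation above collapses to the scalar equation $w = 0.5 + 0.25\,w^2$, which fixed-point iteration solves for the least solution:

```python
import math

# Hypothetical one-state bottom-up WPDA over the real semiring:
#   delta(q, eps -> q, X) = 0.5     (0-pop, 1-push)
#   delta(q, XX  -> q, X) = 0.25    (2-pop, 1-push)
# The single allsum w = w[q, X, q] then satisfies w = 0.5 + 0.25 * w**2.
A, B = 0.5, 0.25

w = 0.0  # start the iteration from the semiring zero
for _ in range(200):
    w = A + B * w * w  # one fixed-point (Kleene) iteration step

# the iteration converges to the least solution of the quadratic, 2 - sqrt(2)
assert abs(w - (2 - math.sqrt(2))) < 1e-9
assert abs(A + B * w * w - w) < 1e-12
print(round(w, 6))  # 0.585786
```

Starting from zero and iterating the right-hand side always converges to the least fixed point in a continuous semiring, which is exactly the allsum; Newton's method would reach it in fewer iterations.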
+ +# 7 Conclusion + +Our study has contributed several results and algorithms whose weighted CFG analogues have long been known, but have previously been missing for weighted PDAs: a normal form analogous to Chomsky normal form and a stringsum algorithm analogous to weighted CKY. But it has also revealed some important differences, confirming that the study of weighted PDAs is of interest in its own right. Most notably, we identified two different normal forms and two corresponding stringsum algorithms (and two allsum algorithms). Since the only existing PDA stringsum algorithm we are aware of, Lang's algorithm, is better suited to bottom-up PDAs, whereas the more standard definition of PDAs is of top-down PDAs, our algorithm for top-down WPDAs fills a significant gap. + +# Acknowledgements + +This material is based upon work supported by the National Science Foundation under Grant No. CCF-2019291. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. + +# Limitations + +Removal of nullary transitions, while similar to removal of nullary rules from a WCFG, is conceptually more complicated, and while it has the same asymptotic complexity, it is currently unknown how the two would compare in practice. Additionally, our nullary removal construction requires a commutative semiring, while removal of nullary productions from a WCFG does not. + +Our algorithm for top-down WPDAs processes a string from right to left, which may be undesirable in some NLP applications and in models of human sentence processing. + +# Ethics Statement + +The authors foresee no ethical concerns with the research presented in this paper. + +# References + +Alfred V. Aho and Jeffrey D. Ullman. 1972. The Theory of Parsing, Translation, and Compiling, volume 1. Prentice-Hall. +Cyril Allauzen, Bill Byrne, Adrià de Gispert, Gonzalo Iglesias, and Michael Riley. 2014.
Pushdown automata in statistical machine translation. Computational Linguistics, 40(3):687-723. +Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2442-2452. +Jean-Michel Autebert, Jean Berstel, and Luc Boasson. 1997. Context-free languages and pushdown automata. In Handbook of Formal Languages, volume 1, pages 111-174. +Y. Bar-Hillel, M. Perles, and E. Shamir. 1961. On formal properties of simple phrase structure grammars. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 14:143-172. +Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 740-750. +Noam Chomsky. 1963. Formal properties of grammars. In Handbook of Mathematical Psychology, volume 2, pages 323-418. John Wiley & Sons. +Manfred Droste and Werner Kuich. 2009. Semirings and formal power series. In Handbook of Weighted Automata, pages 3-28. + +Brian DuSell and David Chiang. 2020. Learning context-free languages with nondeterministic stack RNNs. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 507-519. +Brian DuSell and David Chiang. 2022. Learning hierarchical structures with differentiable nondeterministic stacks. In International Conference on Learning Representations. +Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334-343. 
+Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199-209. +Jay Earley. 1970. An efficient context-free parsing algorithm. Communications of the ACM, 13(2):94-102. +Jason Eisner and John Blatz. 2007. Program transformations for optimization of parsing algorithms and other weighted logic programs. In Proceedings of the 11th Conference on Formal Grammar, pages 45-85. +Javier Esparza, Stefan Kiefer, and Michael Luttenberger. 2007. On fixed point equations over commutative semirings. In 24th Annual Symposium on Theoretical Aspects of Computer Science, pages 296-307. +R. James Evey. 1963. Application of pushdown-store machines. In AFIPS '63: Proceedings of the November 12-14, 1963, Fall Joint Computer Conference, pages 215-227. +Daniel Fernández-González and Carlos Gómez-Rodriguez. 2019. Left-to-right dependency parsing with pointer networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 710-716. +Joshua Goodman. 1999. Semiring parsing. Computational Linguistics, 25(4):573-606. +John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. 2006. Introduction to Automata Theory, Languages, and Computation, 3rd edition. Addison-Wesley Longman Publishing Co. +Liang Huang, Wenbin Jiang, and Qun Liu. 2009. Bilingually-constrained (monolingual) shift-reduce parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1222-1231. + +Donald E. Knuth. 1965. On the translation of languages from left to right. Information and Control, 8(6):607-639. +Bernard Lang. 1974. Deterministic techniques for efficient non-deterministic parsers. 
In ICALP 1974: Automata, Languages and Programming, pages 255-269. +Daniel J. Lehmann. 1977. Algebraic structures for transitive closure. Theoretical Computer Science, 4(1):59-76. +Harry R. Lewis and Christos H. Papadimitriou. 1997. Elements of the Theory of Computation, 2nd edition. Prentice-Hall. +Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stack-pointer networks for dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1403-1414. +Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the Eighth International Conference on Parsing Technologies, pages 149-160. +Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together, pages 50-57. +Philip Resnik. 1992. Left-corner parsing and psychological plausibility. In COLING 1992 Volume 1: The 14th International Conference on Computational Linguistics, pages 191-197. +Brian Roark. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249-276. +Marcel-Paul Schützenberger. 1963. On context-free languages and push-down automata. Information and Control, 6(3):246-264. +Tianze Shi, Liang Huang, and Lillian Lee. 2017. Fast(er) exact decoding and global training for transition-based dependency parsing via a minimal feature set. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 12-23. +Stuart M. Shieber, Yves Schabes, and Fernando C.N. Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24:3-36. +Michael Sipser. 2012. Introduction to the Theory of Computation, 3rd edition. Cengage Learning. +Andreas Stolcke. 1995. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities.
Computational Linguistics, 21(2):165-201. + +David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 323-333. +Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 562-571. + +# A Details of Unary Removal + +Since $U$ is a $|Q|\times |\Gamma |$ matrix, computing its transitive closure takes $O((|Q||\Gamma |)^3)$ time. However, if we perform nullary removal first, the stack alphabet could grow by a factor of $|Q|^2$ because of the stack symbols ${}_{rs}Z$ , which would seem to make the transitive closure take $O((|Q|^3 |\Gamma |)^3)$ time. + +For comparison, if we converted the PDA to a CFG, it would have $O(|Q|^2 |\Gamma |)$ nonterminals, so computing the transitive closure of the unary rules would take $O((|Q|^2 |\Gamma |)^3)$ time. + +But the graph formed by the unary transitions can be decomposed into several strongly connected components (SCCs), many of which are identical, so the transitive closure can be sped up considerably. Define three matrices for three different forms of unary transitions: + +$$ +U _ {p t Z, q s X} ^ {1} = \delta (p, {} _ {r t} Z \xrightarrow {\varepsilon} q, {} _ {r s} X) +$$ + +$$ +U _ {q p X, q X} ^ {2} = \delta (q, {} _ {p p} X \xrightarrow {\varepsilon} q, X ^ {\not \varepsilon}) +$$ + +$$ +U _ {p Y, q X} ^ {3} = \delta (p, Y ^ {\not \varepsilon} \xrightarrow {\varepsilon} q, X ^ {\not \varepsilon}). +$$ + +There are no transitions of the form $p, Y ^ {\not \varepsilon} \xrightarrow {\varepsilon / w} q, {} _ {r s} X$ . Note that in the first equation, the transition weight does not depend on $r$ , so $r$ does not occur on the left-hand side. Then let + +$$ +V = U ^ {1 *} U ^ {2} U ^ {3 *} +$$ + +so that $V_{psY,qX}$ is the total weight of runs from configurations of the form $(p, {} _ {rs} Y)$ to configurations of the form $(q, X ^ {\not \varepsilon})$ . + +Finally, we remove the unary transitions and modify the non-unary transitions as follows. Each non-unary transition on the left is replaced with the transitions on the right: + +$$ +\begin{array}{l l} p, \gamma \xrightarrow {a / w} q, {} _ {r s} X & \quad p, \gamma \xrightarrow {a / w \otimes U _ {q s X , t u Y} ^ {1 *}} t, {} _ {r u} Y \\ & \quad p, \gamma \xrightarrow {a / w \otimes V _ {q s X , t Y}} t, Y ^ {\not \varepsilon} \end{array} +$$ + +$$ +p, \gamma \xrightarrow {a / w} q, X ^ {\not \varepsilon} \qquad p, \gamma \xrightarrow {a / w \otimes U _ {q X , r Y} ^ {3 *}} r, Y ^ {\not \varepsilon} +$$ + +Since $V$ can be computed in $O((|Q|^2 |\Gamma |)^3)$ time, the asymptotic time complexity of removing nullary and unary transitions is the same when performed directly on the WPDA as compared with converting to a WCFG and removing nullary and unary rules.
+ +Item form + +$$ +[ i, p, \gamma , j, q ] \qquad \begin{array}{l} 0 \leq i < j \leq n; \; p, q \in Q \\ \gamma \in \Gamma^ {*}; \; | \gamma | \in [ 1: 2 ] \end{array} +$$ + +Inference rules + +$$ +\frac {[ i , p , Y , k , r ] \quad [ k , r , Z , j ^ {\prime} , s ]}{[ i , p , Y Z , j ^ {\prime} , s ]} +$$ + +$$ +\frac {\left[ i , p , Y Z , j - | a | , s \right]}{\left[ i , p , X , j , q \right]} \quad \begin{array}{l} s, Y Z \xrightarrow {a / w} q, X \\ s (j - | a |: j ] = a \end{array} +$$ + +$$ +\frac {[ i , p , Y , j - 1 , r ]}{[ i , p , X , j , q ]} \quad r, Y \xrightarrow {s _ {j} / w} q, X +$$ + +$$ +\overline {{[ i , p , X , j , q ]}} \quad \begin{array}{l} p, \varepsilon \xrightarrow {s _ {j} / w} q, X \\ j = i + 1 \end{array} +$$ + +Goal + +$$ +[ 0, \iota , S, n, f ] +$$ + +Figure 7: Deductive system corresponding to the alternative sped-up algorithm for stringsums in bottom-up WPDAs in normal form. + +# B Fast Bottom-up Stringsum Algorithm + +Fig. 7 shows an alternative deductive system for parsing in bottom-up WPDAs. The algorithm that can be derived from this deductive system achieves a runtime improvement by a factor of $|Q|$ and has the same space complexity as Lang's algorithm. This algorithm, however, does not achieve further time and space complexity improvements on the special type of automaton used by Lang.
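The rule-splitting trick of §4.4 and this appendix is, at bottom, a factorization of one three-way combination into two binary ones. The following numerical sketch (random dense weights over the real semiring; the dimensions and tensor names are illustrative stand-ins, not tied to any particular WPDA) checks that the fused and split computations yield identical weights:

```python
import numpy as np

# C[i,p,Y,k,r] plays the role of the item weights [i,p,Y,k,r], and
# D[s,Y,Z,q,X] plays the role of the 2-pop, 1-push transition weights
# delta(s, YZ -> q, X). Sizes are small and arbitrary.
rng = np.random.default_rng(0)
n, Q, G = 4, 2, 3                      # positions, states, stack symbols
C = rng.random((n, Q, G, n, Q))        # item weights
D = rng.random((Q, G, G, Q, G))        # transition weights

# fused rule: combine both premise items and the transition in one step
fused = np.einsum("ipykr,krzjs,syzqx->ipxjq", C, C, D)

# split rules: first build the intermediate items <k,r,Y\X,j,q>,
# then combine them with the remaining premise
T = np.einsum("krzjs,syzqx->kryxjq", C, D)       # first (cheaper) rule
split = np.einsum("ipykr,kryxjq->ipxjq", C, T)   # second rule

assert np.allclose(fused, split)
```

The split version touches fewer index tuples per step, which is exactly where the asymptotic factor-of-$|Q|$ saving comes from; over a general semiring the same factorization applies because it only uses distributivity.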
\ No newline at end of file diff --git a/algorithmsforweightedpushdownautomata/images.zip b/algorithmsforweightedpushdownautomata/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..831ce71c694c56ce950a965188fe2bec312d356c --- /dev/null +++ b/algorithmsforweightedpushdownautomata/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ec268e204fc517625160cc5d79353d0777ae09e83115833981498d75068807e +size 385405 diff --git a/algorithmsforweightedpushdownautomata/layout.json b/algorithmsforweightedpushdownautomata/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d8f4b90501fefebdd57b67f79d6daf93f9703ad8 --- /dev/null +++ b/algorithmsforweightedpushdownautomata/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b02139126f5ca042c577e7034cc288a73713c9927ada77d30d7aba1028b9e9a3 +size 840343 diff --git a/aligningrecommendationandconversationviadualimitation/0c4c4c87-0fbc-4a19-a996-7d382570498c_content_list.json b/aligningrecommendationandconversationviadualimitation/0c4c4c87-0fbc-4a19-a996-7d382570498c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ad6aafbf2acab0ad0b9efe18c708b991b74880f4 --- /dev/null +++ b/aligningrecommendationandconversationviadualimitation/0c4c4c87-0fbc-4a19-a996-7d382570498c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83818fbf87752c0556fc6c4d4cc7004e95c9b0979397d86f6669bc72bc813ea8 +size 92633 diff --git a/aligningrecommendationandconversationviadualimitation/0c4c4c87-0fbc-4a19-a996-7d382570498c_model.json b/aligningrecommendationandconversationviadualimitation/0c4c4c87-0fbc-4a19-a996-7d382570498c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..7a09b51b9d98c9ca6911230a0172e288fc1e9447 --- /dev/null +++ b/aligningrecommendationandconversationviadualimitation/0c4c4c87-0fbc-4a19-a996-7d382570498c_model.json @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6fefeb727ec1a49d702792a8d06ebc70321c9998ba51aa971aa77fea6958b138 +size 110051 diff --git a/aligningrecommendationandconversationviadualimitation/0c4c4c87-0fbc-4a19-a996-7d382570498c_origin.pdf b/aligningrecommendationandconversationviadualimitation/0c4c4c87-0fbc-4a19-a996-7d382570498c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b6804223cb759f80733157cd08daa69de4e867ff --- /dev/null +++ b/aligningrecommendationandconversationviadualimitation/0c4c4c87-0fbc-4a19-a996-7d382570498c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:baadaed2987bfcf868dba97bfbfcd128baa3febda55a715440f8e849f58ac21c +size 762170 diff --git a/aligningrecommendationandconversationviadualimitation/full.md b/aligningrecommendationandconversationviadualimitation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..6b763e5dc09592c8ee7886fefc1c3986d3c00a6c --- /dev/null +++ b/aligningrecommendationandconversationviadualimitation/full.md @@ -0,0 +1,369 @@ +# Aligning Recommendation and Conversation via Dual Imitation + +Jinfeng Zhou $^{1,2}$ , Bo Wang $^{1,2,\boxtimes}$ , Minlie Huang $^{3}$ , Dongming Zhao $^{4}$ , Kun Huang $^{4}$ , Ruifang He $^{1,2}$ , Yuexian Hou $^{1}$ + +1College of Intelligence and Computing, Tianjin University, Tianjin, China + +$^{2}$ State Key Laboratory of Communication Content Cognition, People's Daily Online, Beijing, China + +3The CoAI Group, DCST, Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, + +3Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China + +$^{4}$ AI Lab, China Mobile Communication Group Tianjin Co., Ltd. 
+ +{jfzhou,bo_wang}@tju.edu.cn aihuang@tsinghua.edu.cn + +# Abstract + +Human conversations of recommendation naturally involve shifts of user interest, which can align the recommendation actions and the conversation process to make accurate recommendations with rich explanations. However, existing conversational recommendation systems (CRS) ignore the advantage of user interest shift in connecting recommendation and conversation, which leads to an ineffective, loosely coupled structure of CRS. To address this issue, by modeling the recommendation actions as recommendation paths in a knowledge graph (KG), we propose DICR (Dual Imitation for Conversational Recommendation), which designs a dual imitation to explicitly align the recommendation paths and user interest shift paths in a recommendation module and a conversation module, respectively. By exchanging alignment signals, DICR achieves bidirectional promotion between the recommendation and conversation modules and generates high-quality responses with accurate recommendations and coherent explanations. Experiments demonstrate that DICR outperforms the state-of-the-art models on recommendation and conversation performance with automatic, human, and novel explainability metrics. + +# 1 Introduction + +Conversational recommendation systems (CRS) (Liu et al., 2020; Li et al., 2022) aim to conduct recommendations during conversations with users (Gao et al., 2021). Compared with traditional recommendation systems (Wang et al., 2019; Xian et al., 2019), CRS has two main advantages: understanding the user's dynamic interests during the conversation and making persuasive responses with coherent explanations of the recommendation (Jannach et al., 2021). In both advantages, user interest shift plays an essential role. As shown in the dialog in Fig.
1, the successful recommendation of "Iron Man 3" in the final response is achieved by tracking and reasoning over the user interest shift of "The Avengers $\rightarrow$ Sci-Fi $\rightarrow$ Thor $\rightarrow$ Stan Lee $\rightarrow$ Iron Man 3". Furthermore, the final response is persuasive because it utilizes part of the user interest shift, i.e., "Thor $\rightarrow$ Stan Lee $\rightarrow$ Iron Man 3", as the explanation. + +![](images/369f9570ddbf83df1477684284a644a270bcb9a2a5adfd83a81c7c619de210d3.jpg) +Figure 1: The interest shift process expressed in the conversation can guide the generation of an explainable recommendation path. The explainable recommendation path, in turn, can guide the generation of an explainable response containing accurate recommendations. Recommendation and conversation maximize mutual benefits in bidirectional guidance. + +Due to the limited context (Hayati et al., 2020), a recommendation module based on a knowledge graph (KG) is helpful for tracking the user interest shift in conversation. As shown in Figure 1, formally corresponding to paths in the KG, the user interest shift in conversation not only guides the reasoning-based prediction of recommendations, but also guides the explanation generation in the response. + +However, existing KG-enhanced CRS models (Chen et al., 2019; Zhou et al., 2020, 2021, 2022b; Zhang et al., 2022a) have not made full use of the user interest shift to tightly align the KG-based recommendation and conversation. Consequently, one issue is less accurate recommendation, due to using unrelated entities in the conversation to support the recommendation instead of the coherent entities in the user interest shift. The other issue is the lack of explanation in the response, due to the black-box representation of the user preference, which ignores the explicit preference logic in the user interest shift.
+ +To address these issues in CRS, we propose to align the explicit behaviors of recommendation reasoning and the conversation process, which are described as recommendation paths and interest shift paths, respectively. As in Figure 1, a recommendation path is an explicit path in the KG consisting of explicit relations between entity nodes and ending with a predicted recommended entity; an interest shift path is an implicit path in the dialog context consisting of implicit relations between entity words. The recommendation path and the interest shift path are concrete manifestations of the user interest shift in the KG and the dialog, respectively. The sequence of interest entities shared by the two paths facilitates the alignment of recommendation reasoning and the conversation process, which can be effectively achieved by imitation learning (Ho and Ermon, 2016). Therefore, we propose a dual imitation framework named DICR (Dual Imitation for Conversational Recommendation). DICR designs bidirectional alignment signals from dual imitation learning to improve the CRS by forcing the recommendation and conversation to behave similarly to the shared user interest shift. + +Precisely, in a conversation-aware recommendation module, to align the recommendation reasoning to the conversational user interest shift, the recommendation side of the dual imitation, i.e., path imitation, adopts adversarial reinforcement learning to make the recommendation reasoning policy imitate the user interest shift in conversation. The reasoned recommendation paths are provided to the conversation module as alignment signals. In a recommendation-aware conversation module, to align the conversation process to the recommendation paths, the conversation side of the dual imitation, i.e., knowledge imitation and semantic imitation, refines the weight distribution and semantic encoding of the recommendation paths by imitating the human response and the utterance statements of golden explanations, respectively.
These two imitations also provide the recommendation module with rewards as alignment signals indicating how well the predicted recommendation paths are consistent with the conversation context. + +Our contributions are summarized as follows: + +(1) To the best of our knowledge, we are the first to adopt imitation learning in CRS to integrate recommendation and conversation tightly. We design a dual imitation framework named DICR, which aligns recommendation and conversation behavior and promotes bidirectional improvement, taking the recommendation paths and conversational rewards from the dual imitation as alignment signals. + +(2) The dual imitation benefits knowledge acquisition and semantic generation, promoting the accuracy of recommendation and significantly improving the explanations of recommendations in generated responses with coherent knowledge. +(3) Extensive experiments demonstrate that DICR outperforms the SOTA models on both recommendation and conversation performance with automatic, human and novel explainability metrics. + +# 2 Related Work + +Conversational recommendation systems (CRS) aim to obtain user interests through conversational interaction and make persuasive recommendations (Jannach et al., 2021; Gao et al., 2021; Ren et al., 2021). To track the user interest shift in conversation, an intuitive strategy is to ask related questions (Kostric et al., 2021; Zhang et al., 2022b), which leads to question-based CRS (Lei et al., 2020; Deng et al., 2021). Limited by predefined templates for asking and recommending, it is difficult for question-based CRS to flexibly adapt to different contexts and converse in a human-like manner. + +Towards more flexible conversation, generation-based CRS (Li et al., 2018; Zhang et al., 2021; Liang et al., 2021) capture user interests from context and generate responses containing persuasive explanations for recommendation.
Limited by sparse context and language complexity (Lu et al., 2021; Yang et al., 2022), it is challenging for generation-based CRS to track the user interest shift in conversation. As a popular solution, KG-enhanced CRS (Moon et al., 2019; Ma et al., 2021; Zhou et al., 2022a) involve knowledge graphs encoding explicit relations among potential interest items. + +Although KG-enhanced CRS have achieved significant improvements, most approaches (Chen et al., 2019; Zhou et al., 2020, 2021, 2022b) adopt a black-box style to transfer implicit and sparse information between recommendation and conversation. The explicit interest reasoning path in the KG and the explicit interest shift path in conversation offer a natural way to align recommendation and conversation so that they benefit each other. + +![](images/4c41e1fd45716e6d2c8e662865056cb4a2273114be5d0cefb374dcb173c2c635.jpg) +Figure 2: The dual imitation architecture of the proposed DICR model. + +# 3 Problem Formalization + +A KG $\mathcal{G} = \{(e,r,e^{\prime})\mid e,e^{\prime}\in \mathcal{E},r\in \mathcal{R}\}$ , where $\mathcal{E}$ is the entity set and $\mathcal{R}$ is the relation set. Each triplet $(e,r,e^{\prime})$ indicates that the head entity $e$ and the tail entity $e^{\prime}$ are connected by the relation $r$ . In this paper, a recommendation path is a multi-hop reasoning path $p$ on $\mathcal{G}$ : $p = \left\{e_0\stackrel {r_1}{\to}e_1\stackrel {r_2}{\to}\ldots \stackrel {r_t}{\to}e_t\right\}$ . + +Suppose we have a conversational recommendation corpus $\mathcal{D}$ parallel to a knowledge graph $\mathcal{G}$ , in which the interests (e.g., movies) mentioned in $\mathcal{D}$ are linked to the entities in $\mathcal{G}$ . $C = (c_{1}, c_{2}, \ldots, c_{n})$ is the conversation context, where $c_{i}$ is an utterance. $I$ is the set of recommendation items under $C$ . $Y = (y_{1}, y_{2}, \ldots, y_{m})$ is a response containing $I$ , where $y_{i}$ is a token.
$K = \left\{e_{0} \xrightarrow{r_{1}^{K}} e_{1} \xrightarrow{r_{2}^{K}} \ldots \xrightarrow{r_{l}^{K}} e_{l}\right\}$ is a golden interest shift path connecting the interest entities $e_{0,1,\ldots,l}$ in $C$ and $Y$ . $K$ also matches a recommendation path $p$ in $\mathcal{G}$ . $K$ can be extracted by identifying entities in the conversation and linking them to the nodes in $\mathcal{G}$ . The logical utterance statement of $K$ is $U$ , which is the explanation of the recommendation; e.g., given a one-hop reasoning path in $\mathcal{G}$ ("Thor", "written_by", "Stan Lee"), its tokenized $U$ can be "Thor is written by Stan Lee". + +In this paper, given a conversation context $C$ and a KG $\mathcal{G}$ , we aim to generate a response $Y$ containing the recommendation set $I$ and the explanation $U$ . We design a novel CRS model, in which KG path reasoning obtains an explicit reasoning path set $\mathcal{P}$ from $\mathcal{G}$ to help generate $Y$ containing the recommendation set $I$ and a coherent explanation $U$ of $I$ . + +# 4 Approach + +# 4.1 Architecture Overview + +As shown in Fig. 2, in DICR, the conversation-aware recommendation module learns a recommendation path reasoning policy with adversarial reinforcement learning. The path imitation discriminator aligns the recommendation paths with the golden interest shift path and rewards the agent with $R_{p,t}$ to optimize the reasoning policy. As a result, the top tokenized recommendation paths are provided to the conversation module as alignment signals. In the recommendation-aware conversation module, the knowledge imitation aligns the prior and posterior recommendation knowledge in the tokenized recommendation paths and the human response, respectively. The semantic imitation uses Mutual Information Maximization (MIM) to align the semantic encoding of the recommendation paths with that of the utterance statement of the golden interest shift path.
These imitations refine the knowledge and word distributions and thus benefit path-aware response generation. They also produce the rewards $R_{k,t}$ and $R_{s,t}$ as alignment signals to guide the recommendation path reasoning. Finally, DICR performs joint training to bidirectionally promote recommendation and conversation with the alignment signals from dual imitation.

# 4.2 Conversation-aware Recommendation Module

In this module, we formalize the user interest shift in conversation as a Markov Decision Process (MDP) (Sutton and Barto, 2018) over KG paths, and reason about interest shift paths with adversarial reinforcement learning. We construct KG embeddings (Bordes et al., 2013) for each entity. The entities mentioned in $C$ are extracted by fuzzy matching, and their embeddings are averaged as the preference representation of the user $u$ in the current context.

State. We start path reasoning from the starting entity $e_0$ in $C$. The initial state $s_0 \in S$ is $s_0 = \{u, e_0\}$. We encode the $H$-step history of entities and relations as the observed state $s_t \in S$ at step $t$, i.e., $s_t = \{u, e_{t-H}, \ldots, r_t, e_t\}$, whose embedding $\boldsymbol{s}_t$ is obtained by concatenating the embeddings of all members of $s_t$, i.e., $\boldsymbol{s}_t = \boldsymbol{u} \oplus \boldsymbol{e}_{t-H} \oplus \ldots \oplus \boldsymbol{r}_t \oplus \boldsymbol{e}_t$, where $\boldsymbol{u}$ is the preference representation and $\oplus$ is the concatenation operator. If the path length is smaller than $H$, we pad $\boldsymbol{s}_t$ with zeros.

Action. The action space $\mathcal{A}_t$ of the state $s_t$ is defined as all outgoing edges of the entity $e_t$ in the KG $\mathcal{G}$, excluding history entities and relations: $\mathcal{A}_t = \{(r,e) \mid (e_t,r,e) \in \mathcal{G}, e \notin \{e_0, \dots, e_{t-1}\}\}$. As an option to terminate, $\mathcal{A}_t$ also contains a self-loop edge.

Transition.
Given the current state $s_t$ and the action $a_t = (r_{t+1}, e_{t+1})$ chosen by the agent, the next state is $s_{t+1} = \mathcal{T}(s_t, a_t) = \{u, e_{t-H+1}, \ldots, r_{t+1}, e_{t+1}\}$, where $\mathcal{T}: S \times \mathcal{A} \to S$ is the state transition function.

Reward. We only give the agent a terminal reward $R_{T,t}$: $R_{T,t}$ is 1 if the agent generates a path that ends with one of the recommended items $I_Y$ in the response $Y$, and 0 otherwise.

Policy Optimization. We adopt adversarial imitation learning (Zhao et al., 2020) based on the Actor-Critic method for policy optimization. The actor learns a path reasoning policy $\pi_{\varphi}(a_t, s_t, \mathcal{A}_t)$ which selects a "good" action $a_t$ based on the current state $s_t$ and the action space $\mathcal{A}_t$ so as to "fool" the discriminator in the path imitation. The critic estimates the value $Q_{\delta}(s_t, a_t)$ of each action $a_t$ under the state $s_t$ to guide the actor toward "good" actions. We use two fully connected layers as the actor policy network:

$$
\pi_{\varphi}\left(a_{t}, s_{t}, \mathcal{A}_{t}\right) = \eta\left(\boldsymbol{A}_{t} f\left(W_{\varphi, 2} f\left(W_{\varphi, 1} \boldsymbol{s}_{t}\right)\right)\right), \tag{1}
$$

where $\boldsymbol{A}_{t}$ encodes the action space by stacking the embeddings of all actions in $\mathcal{A}_t$, and the embedding of each $a_{t} \in \mathcal{A}_{t}$ is obtained by a lookup layer. $\eta(\cdot)$ is the softmax function, $f(\cdot)$ is the ELU activation function, and $W_{\varphi,1}$ and $W_{\varphi,2}$ are learnable.
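To make Eq. (1) concrete, here is a pure-Python sketch of the actor: two linear layers with ELU applied to the state embedding, a dot product with each stacked action embedding, and a softmax over the action space. The dimensions, random initialization, and absence of bias terms are simplifying assumptions.

```python
import math
import random

def elu(x):
    # ELU activation f(.) from Eq. (1)
    return x if x > 0 else math.exp(x) - 1.0

def linear(W, v):
    # Matrix-vector product; W is a list of rows
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def actor_policy(state_emb, action_embs, W1, W2):
    # eta(A_t f(W2 f(W1 s_t))): transform the state, score each action
    h = [elu(x) for x in linear(W1, state_emb)]
    h = [elu(x) for x in linear(W2, h)]
    scores = [sum(a * x for a, x in zip(a_emb, h)) for a_emb in action_embs]
    return softmax(scores)

random.seed(0)
d = 4  # illustrative embedding size
state = [random.gauss(0, 1) for _ in range(d)]
actions = [[random.gauss(0, 1) for _ in range(d)] for _ in range(3)]
W1 = [[random.gauss(0, 0.5) for _ in range(d)] for _ in range(d)]
W2 = [[random.gauss(0, 0.5) for _ in range(d)] for _ in range(d)]
probs = actor_policy(state, actions, W1, W2)  # a distribution over A_t
```

The critic defined next has the same two-layer shape but scores a single state-action pair instead of normalizing over the whole action space.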
We design a critic network as:

$$
Q_{\delta}\left(s_{t}, a_{t}\right) = \boldsymbol{a}_{\delta, t} \cdot f\left(W_{\delta, 2} f\left(W_{\delta, 1} \boldsymbol{s}_{t}\right)\right), \tag{2}
$$

where $\boldsymbol{a}_{\delta,t}$ is the embedding of the action $a_{t}$ in the critic, $f(\cdot)$ is the ELU activation function, and $W_{\delta,1}$ and $W_{\delta,2}$ are learnable.

Path Imitation To guide the actor to generate a path in line with the user interest shift, we design the path imitation discriminator $\mathcal{I}_{p,\tau}$, which judges whether the path segment generated by the actor at each step $t$ is similar to the golden interest shift path segment in the current context. Given the current state $s_t$ and action $a_t$, the probability that $\mathcal{I}_{p,\tau}$ assigns to $(s_t, a_t)$ conforming to the golden shift path segment is:

$$
\mathcal{I}_{p, \tau}(s_{t}, a_{t}) = \sigma\left(\boldsymbol{b}_{p, \tau}^{T} z\left(W_{p, \tau} z\left(\boldsymbol{s}_{t} \oplus \boldsymbol{a}_{p, t}\right)\right)\right), \tag{3}
$$

where $\boldsymbol{a}_{p,t}$ is the embedding of $a_{t}$ in $\mathcal{I}_{p,\tau}$, $z(\cdot)$ is the tanh function, and $\sigma(\cdot)$ is the sigmoid function. $W_{p,\tau}$ and $\boldsymbol{b}_{p,\tau}$ are learnable. $\mathcal{I}_{p,\tau}$ is learned by minimizing the loss function $L_{\tau}$:

$$
L_{\tau} = -\left(\log\left(1 - \mathcal{I}_{p, \tau}\left(s_{t}, a_{t}\right)\right) + \log\left(\mathcal{I}_{p, \tau}\left(s_{t}^{K}, a_{t}^{K}\right)\right)\right), \tag{4}
$$

where $s_t^K$ and $a_t^K$ denote the state and action of the golden shift process at the same step $t$. We further obtain the reward $R_{p,t}$ given by $\mathcal{I}_{p,\tau}$ to the actor at each step $t$:

$$
R_{p, t} = \log\left(\mathcal{I}_{p, \tau}\left(s_{t}, a_{t}\right)\right) - \log\left(1 - \mathcal{I}_{p, \tau}\left(s_{t}, a_{t}\right)\right). \tag{5}
$$

Here, the aggregated reward obtained by the agent is $R_{t} = \alpha R_{p,t} + (1 - \alpha)R_{T,t}$, where $\alpha \in [0,1]$. In the final joint training with the conversation module, the agent receives two further rewards, $R_{k,t}$ and $R_{s,t}$, from the conversation module. Given $Q_{\delta}(s_t,a_t)$, the actor and critic are updated jointly by minimizing the loss function $L_{\varphi,\delta}$:

$$
L_{\varphi, \delta} = -\mathrm{E}_{a \sim \pi_{\varphi}} Q_{\delta}(s_{t}, a) + \left(Q_{\delta}(s_{t}, a_{t}) - G_{t}\right)^{2}, \tag{6}
$$

where the total cumulative reward $G_{t} = R_{t} + \mathrm{E}_{a\sim \pi_{\varphi}}Q_{\delta}(s_{t+1},a)$ is calculated by the Bellman equation (Bellman, 2013). The actor, critic and path imitation discriminator are jointly optimized by minimizing $L_{REC} = L_{\varphi,\delta} + L_{\tau}$.

Beam Search of Recommendation Paths After agent pre-training, we adopt beam search to generate candidate recommendation paths. Sorted by the probability of leading to an accurate recommendation, the top $N_{p}$ paths are tokenized into statements containing entity and relation words, which are provided to the conversation module as alignment signals.

# 4.3 Recommendation-aware Conversation Module

Encoder The conversation context $C$, the response $Y$, the utterance $U$, and the tokenized recommendation paths $\mathcal{P}$ are encoded by the context encoder, the knowledge encoder and the semantic encoder, all based on Bi-RNN.
Given an input sequence $X = (x_{1},\ldots,x_{N})$, the forward and backward RNNs respectively generate hidden states $h_t^f$ and $h_t^b$ for each $x_{t}$, which are concatenated to form the overall hidden state $h_t$:

$$
h_{t} = \left[h_{t}^{f}; h_{t}^{b}\right] = \left[\overrightarrow{\mathrm{GRU}}\left(x_{t}, h_{t-1}^{f}\right); \overleftarrow{\mathrm{GRU}}\left(x_{t}, h_{t+1}^{b}\right)\right], \tag{7}
$$

where $[;]$ is the concatenation operation. We denote the hidden states of all time steps as $H = (h_1, h_2, \ldots, h_N)$ and the final hidden state as $o = \left[h_N^f; h_1^b\right]$. For all input sources, we obtain $H_C$, $H_Y$, $H_U$, $\{H_{\mathcal{P},i}\}_{i=1}^{N_p}$ and $o_C$, $o_Y$, $o_U$, $\{o_{\mathcal{P},i}\}_{i=1}^{N_p}$.

Knowledge Imitation To refine the distribution of tokenized recommendation paths toward accurate recommendations with proper explanations, knowledge imitation makes the tokenized recommendation paths imitate the human response, which often contains the correct recommendation destination but no strong explanation. Given the encoded conversation context $o_{C}$ and the encodings $o_{\mathcal{P}} = \{o_{\mathcal{P},i}\}_{i=1}^{N_{p}}$ of the tokenized recommendation paths, $o_{C}$ serves as the prior information. We first obtain the prior path weight distribution from the similarity between $o_{C}$ and each path $p_{i}$:

$$
P\left(p_{i} \mid C\right) = \frac{\exp\left(o_{\mathcal{P}, i} \cdot z\left(W_{K, C} o_{C}\right)\right)}{\sum_{j=1}^{N_{p}} \exp\left(o_{\mathcal{P}, j} \cdot z\left(W_{K, C} o_{C}\right)\right)}, \tag{8}
$$

where $z(\cdot)$ is the tanh function and $W_{K,C}$ is learnable.

Since the recommendation paths contain the predicted interest in the response, the prior information alone is insufficient to calculate the recommendation path distribution.
Therefore, the imitation also involves the human response as posterior information to obtain the posterior distribution of the paths:

$$
P\left(p_{i} \mid Y\right) = \frac{\exp\left(o_{\mathcal{P}, i} \cdot z\left(W_{K, Y} o_{Y}\right)\right)}{\sum_{j=1}^{N_{p}} \exp\left(o_{\mathcal{P}, j} \cdot z\left(W_{K, Y} o_{Y}\right)\right)}, \tag{9}
$$

where $W_{K,Y}$ is learnable. We use a Kullback-Leibler divergence loss $L_{KL}$ to make $P(p_i\mid C)$ imitate $P(p_i\mid Y)$, and a BOW loss $L_{BOW}$ to enforce the relevance between the recommendation path distribution and the response (Lian et al., 2019).

Semantic Imitation To refine the semantic encoding of tokenized recommendation paths, semantic imitation makes the tokenized recommendation paths imitate the golden utterance statement of the correct recommendation and its coherent explanation. Given the encoded conversation context $o_{C}$ and the hidden states $o_{\mathcal{P}} = \{o_{\mathcal{P},i}\}_{i=1}^{N_p}$ of the tokenized recommendation path encodings, we apply attention (Bahdanau et al., 2015) to $o_{\mathcal{P}}$ to obtain the context-based path aggregation representation $o_{S,\mathcal{P}} = \text{Attention}(o_{\mathcal{P}}, z(W_{S,\mathcal{P}} o_{C}))$, where $z(\cdot)$ is the tanh function and $W_{S,\mathcal{P}}$ is a parameter matrix.

To make $o_{S,\mathcal{P}}$ and the semantics $o_U$ of the encoded golden interest shift path behave similarly, we adopt Mutual Information Maximization (Cao et al., 2021), which forces the learned context-based aggregation representation to carry the semantics of the golden utterance statement by maximizing the mutual information between $o_{S,\mathcal{P}}$ and $o_U$. We use a binary cross-entropy loss as the mutual information estimator.
The learning objective is:

$$
L_{BCE} = -\frac{1}{|\mathbb{P}| + |\mathbb{N}|}\left(\sum_{\mathbb{P}} \log \mathcal{I}_{S, \phi}\left(o_{S, \mathcal{P}}, o_{U}\right) + \sum_{\mathbb{N}} \log\left(1 - \mathcal{I}_{S, \phi}\left(\widetilde{o_{S, \mathcal{P}}}, o_{U}\right)\right)\right), \tag{10}
$$

where $\mathbb{P}$ and $\mathbb{N}$ represent the sets of positive and negative samples, respectively, and $\widetilde{o_{S,\mathcal{P}}}$ is the encoding of a randomly sampled negative example. $\mathcal{I}_{S,\phi}$ is a semantic imitation discriminator that scores $o_{S,\mathcal{P}}$ and $o_U$ via a bilinear mapping function:

$$
\mathcal{I}_{S, \phi}\left(o_{S, \mathcal{P}}, o_{U}\right) = \sigma\left(\left(o_{S, \mathcal{P}}\right)^{T} W_{S, \phi} o_{U}\right), \tag{11}
$$

where $\sigma(\cdot)$ is the sigmoid function and $W_{S,\phi}$ is a parameter matrix. For response generation, an MLP layer merges the learned semantics $o_S$ into the hidden state $o_C$ of the conversation context as the initial hidden state of the decoder, where $o_S = o_U$ when $U$ is available and otherwise $o_S$ is the learned $o_{S,\mathcal{P}}$; in the inference stage, $U$ is unknown.

Path-aware Response Generation We employ a GRU to integrate context and path information to generate a response. Given the decoder state $h_t$ and the output states $H_C$ and $\{H_{\mathcal{P},i}\}_{i=1}^{N_p}$ of the context encoder and knowledge encoder, we apply attention to $H_C$ at decoder step $t$: $d_t^C, v_t^C = \text{Attention}(H_C, h_t)$, where $d_t^C$ is the attention distribution over the tokens of the context $C$ and $v_t^C$ is the aggregation vector of $C$. $\left\{d_t^{\mathcal{P},i}\right\}_{i=1}^{N_p}$ and $\left\{v_t^{\mathcal{P},i}\right\}_{i=1}^{N_p}$ are obtained analogously from $H_{\mathcal{P},i}$ for each path $p_i$.
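The $\text{Attention}(\cdot,\cdot)$ operator used here can be sketched as follows; dot-product scoring is a simplifying assumption (the paper cites Bahdanau-style attention), and the toy states are illustrative:

```python
import math

def attention(H, h):
    """Given encoder states H (list of vectors) and a query h, return the
    attention distribution d over positions and the aggregation vector v."""
    scores = [sum(x * y for x, y in zip(row, h)) for row in H]  # dot scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    d = [e / z for e in exps]                                   # softmax
    # v is the d-weighted average of the encoder states
    v = [sum(d_i * row[j] for d_i, row in zip(d, H)) for j in range(len(H[0]))]
    return d, v

H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy encoder states
d, v = attention(H, [1.0, 0.0])            # query aligned with dimension 0
```

Positions whose states align with the query (here the first and third) receive equal, larger attention mass than the misaligned second position.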
We obtain the overall path representation $v_{t}^{\mathcal{P}} = \sum_{i=1}^{N_{p}} \mu_{\mathcal{P},i} \cdot v_{t}^{\mathcal{P},i}$, where $\mu_{\mathcal{P},i} = P(p_{i} \mid Y)$ in training and $\mu_{\mathcal{P},i} = P(p_{i} \mid C)$ in inference. To reduce the impact of inaccurately recommended paths, we design a fusion gate $g_{t}$ to determine the contribution of $v_{t}^{\mathcal{P}}$ to the fused information $v_{t}$:

$$
g_{t} = \sigma\left(W_{g}\left[d_{t}^{C}; v_{t}^{\mathcal{P}}\right]\right), \quad v_{t} = g_{t} d_{t}^{C} + (1 - g_{t}) v_{t}^{\mathcal{P}}, \tag{12}
$$

where $W_{g}$ is learnable. Hence, the decoder updates its state as $h_{t+1} = \mathrm{GRU}(h_t, [y_t; v_t])$, where $y_{t}$ is the embedding of the predicted word at time step $t$. $h_t$ and $v_{t}$ are also used to obtain the generation probability $P_{\text{vocab}}(w_t)$ over the vocabulary at time step $t$, formalized as $P_{\text{vocab}}(w_t) = \rho([h_t; v_t])$, where $\rho(\cdot)$ is a two-layer MLP with a softmax function. Furthermore, we adopt a pointer copy mechanism to copy tokens from the tokenized recommendation paths $\mathcal{P}$, which ensures that the logical knowledge in the paths can be copied to enrich the explanation in the response. At time step $t$, the probability of copying a token from $\mathcal{P}$ is a weighted sum, over the path distribution, of copying that token from each path:

$$
P_{\mathcal{P}}\left(w_{t}\right) = \sum_{i=1}^{N_{p}} \mu_{\mathcal{P}, i} \cdot \sum_{\left\{j: p_{i}^{j} = w_{t}\right\}} d_{t, j}^{\mathcal{P}, i}, \tag{13}
$$

where $p_i^j$ is the $j$-th token in the path $p_i$ and $d_{t,j}^{\mathcal{P},i}$ is its attention weight.
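A small sketch of the copy distribution in Eq. (13); the token sequences, path weights, and attention values below are illustrative numbers, not model outputs:

```python
# Eq. (13): the probability of copying token w is the sum over paths of the
# path weight mu_i times the attention mass on the positions of w in path i.

def copy_prob(w, paths, path_weights, attn):
    """paths[i] is a token list; attn[i][j] is the decoder attention on
    token j of path i; path_weights is the distribution mu over paths."""
    return sum(
        mu * sum(a for tok, a in zip(p, d) if tok == w)
        for mu, p, d in zip(path_weights, paths, attn)
    )

paths = [["Thor", "written_by", "Stan", "Lee"],
         ["Thor", "genre", "action"]]
mu = [0.7, 0.3]                       # path distribution, sums to 1
attn = [[0.1, 0.2, 0.3, 0.4],         # attention over tokens of each path
        [0.5, 0.25, 0.25]]
p_copy = copy_prob("Thor", paths, mu, attn)   # 0.7*0.1 + 0.3*0.5 = 0.22
```

Because each attention row and the path distribution each sum to one, the copy probabilities over the union of path tokens also sum to one, so Eq. (14) can mix them with the vocabulary distribution.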
We use a pointer generation probability $\xi_t^{gen}$ (See et al., 2017) to obtain the overall probability distribution:

$$
P\left(w_{t}\right) = \xi_{t}^{gen} P_{\text{vocab}}\left(w_{t}\right) + \left(1 - \xi_{t}^{gen}\right) P_{\mathcal{P}}\left(w_{t}\right), \tag{14}
$$

where $\xi_t^{gen} = \sigma(W_{gen}[y_{t-1}; h_t; v_t])$ and $W_{gen}$ is learnable. When training the conversation module, we use an additional NLL loss to quantify the difference between the golden and generated responses:

$$
L_{NLL} = -\frac{1}{|Y|} \sum_{t=1}^{|Y|} \log\left(P\left(y_{t} \mid y_{<t}, C, \mathcal{P}\right)\right). \tag{15}
$$

In summary, the conversation module is jointly optimized by minimizing the joint loss $L_{GEN} = L_{KL} + L_{BOW} + L_{BCE} + L_{NLL}$.

# 4.4 Bidirectional Improvement of Two Modules

After training the recommendation and conversation modules, we conduct bidirectional joint training. The conversation module provides the recommendation module with the rewards $R_{k,t}$ and $R_{s,t}$ from the knowledge imitation and the semantic imitation, respectively. $R_{k,t}$ is the knowledge consistency between the recommendation paths and the human response: if the path $p \in \mathcal{P}$, $R_{k,t} = \log(\mu_{\mathcal{P},i}) - \log(1 - \mu_{\mathcal{P},i})$, where $i$ is the index of $p$ in $\mathcal{P}$, and otherwise $R_{k,t} = 0$. $R_{s,t}$ is the semantic similarity between the path segment generated at step $t$ and the golden utterance: $R_{s,t} = \log(\mathcal{I}_{S,\phi}(o_{p,t}, o_U)) - \log(1 - \mathcal{I}_{S,\phi}(o_{p,t}, o_U))$, where $o_{p,t}$ is the hidden state of the tokenized path segment encoded by the semantic encoder. The aggregated reward is $R_{t} = \alpha R_{p,t} + \beta R_{k,t} + \gamma R_{s,t} + (1 - \alpha - \beta - \gamma)R_{T,t}$, where $\alpha + \beta + \gamma \in [0,1]$. If the path is shorter than the maximum reasoning length, $\beta = 0$.
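The reward shaping above can be sketched as follows. The logit-style form mirrors Eq. (5); the discriminator scores are illustrative values, and the weights come from the implementation details ($\alpha = \gamma = 0.006$, $\beta = 0.001$):

```python
import math

def logit_reward(score):
    """log(score) - log(1 - score): positive when the discriminator assigns
    the generated segment more than 0.5 probability of being golden-like."""
    return math.log(score) - math.log(1.0 - score)

def aggregate_reward(r_p, r_k, r_s, r_t, alpha=0.006, beta=0.001, gamma=0.006):
    # R_t = alpha*R_p + beta*R_k + gamma*R_s + (1 - alpha - beta - gamma)*R_T
    return alpha * r_p + beta * r_k + gamma * r_s + (1 - alpha - beta - gamma) * r_t

r_p = logit_reward(0.8)   # path imitation reward (illustrative score)
r_k = logit_reward(0.7)   # knowledge imitation reward
r_s = logit_reward(0.6)   # semantic imitation reward
r = aggregate_reward(r_p, r_k, r_s, r_t=1.0)   # path reached an item in I_Y
```

With these small weights, the terminal reward dominates while the imitation rewards provide a dense per-step shaping signal.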
The recommendation module provides the conversation module with optimized recommendation paths to guide response generation. In this way, the bidirectional joint training optimizes the alignment between recommendation and conversation and promotes the overall performance of the CRS.

# 5 Experiments

# 5.1 Experiment Setup

Dataset We conducted experiments on OpenDialKG (Moon et al., 2019), a dialog $\leftrightarrow$ KG parallel corpus for CRS in which the mentions of KG entities and their factual connections in a dialog are annotated. The user interest shift path is extracted from context-response pairs, where its start entity is in the context and its destination entity is in the response. Each path is tokenized into an utterance statement that weaves together the entities and relations mentioned in the conversation. More details on the data and experiments are in Appendices A and B.

Models for Comparison (1) TextCNN (Kim, 2014) is a CNN-based recommendation model. (2) Trans (Vaswani et al., 2017) is a Transformer-based response generation model. (3) KBRD (Chen et al., 2019) is a knowledge-based CRS that enhances user preferences with a KG. (4) KGSF (Zhou et al., 2020) is a KG-based CRS aligning the semantic spaces of two KGs. (5) RevCore (Lu et al., 2021) is a review-enhanced CRS. (6) CRFR (Zhou et al., 2021) is a fragment-reasoning-based CRS. (7) $C^2$-CRS (Zhou et al., 2022b) is a contrastive-learning-based CRS. (8) We design ACRG as a variant of DICR that removes all imitation components and the rich interaction between the two modules.

Implementation Details We implemented our model with PyTorch. In the recommendation module, the history length is $H = 1$ and the maximum length of a reasoning path is 3. The maximum action space is 250. We trained the KG embeddings with embedding size 128. The reward weights are $\alpha = \gamma = 0.006$ and $\beta = 0.001$. The conversation module receives $N_{p} = 10$ recommendation paths. All encoders and decoders have 2 layers with 800 hidden units per layer. The word embeddings are initialized with word2vec and have size 300. We used the Adam optimizer (Kingma and Ba, 2015) with batch size 32 and learning rate 0.0001. We trained our model in four steps: we first trained the model to minimize the $L_{REC}$ loss, then minimized the BOW and BCE losses to pre-train the knowledge imitation and semantic imitation components, then minimized the $L_{GEN}$ loss, and finally jointly trained the whole model.

| Models  | Recall@1 | Recall@10 | Recall@25 |
| ------- | -------- | --------- | --------- |
| TextCNN | 0.059    | 0.177     | 0.235     |
| KBRD    | 0.104    | 0.407     | 0.490     |
| KGSF    | 0.119    | 0.436     | 0.523     |
| RevCore | 0.124    | 0.432     | 0.516     |
| CRFR    | 0.130    | 0.458     | 0.543     |
| C²-CRS  | 0.112    | 0.465     | 0.541     |
| ACRG    | 0.185    | 0.490     | 0.629     |
| DICR    | 0.211*   | 0.511*    | 0.643*    |
| w/o PI  | 0.203    | 0.497     | 0.635     |
| w/o KI  | 0.201    | 0.494     | 0.632     |
| w/o SI  | 0.200    | 0.500     | 0.631     |

Table 1: Overall evaluation on recommendation. w/o refers to removing the component from DICR. "*" indicates statistical significance compared with the best baseline (t-test, $p < 0.001$).

# 5.2 Evaluation on Recommendation

In the recommendation evaluation, we use Recall@K $(\mathrm{K} = 1, 10, 25)$, indicating whether the top-K predicted items include the golden recommendation item.

Overall Evaluation As shown in Table 1, DICR significantly outperforms all baselines, benefiting from using rich dual imitation signals as the rewards for the recommendation agent. Compared with the best results of CRFR and $C^2$-CRS, DICR achieves $62.3\%$, $9.9\%$, and $18.4\%$ improvements on the three metrics. Despite the substantial progress achieved by extra knowledge (i.e., KBRD, KGSF, RevCore, $C^2$-CRS) and fragment reasoning (i.e., CRFR), their performance is still inferior to ACRG, indicating that black-box preference representation is a sub-optimal interest expression scheme.

Ablation Study of Dual Imitation We separately remove path imitation, knowledge imitation
and semantic imitation from DICR to examine their contribution, called w/o PI, w/o KI and w/o SI. As shown in Table 1, all imitation components contribute to the recommendation performance, providing rewards within the module (PI) or across modules (KI and SI) for reasoning policy learning. On the one hand, the conversational rewards (i.e., from KI and SI) are used as alignment signals that guide the recommender to learn user interest shift policies. On the other hand, the dual-reward-reinforced (i.e., from PI, KI and SI) recommendation paths serve as alignment signals that in turn improve the conversation, promoting the positive cyclic learning of bidirectional interaction with dual imitation.

| Models  | Bleu-1 | Bleu-2 | Dist-1 | Dist-2 | F1     |
| ------- | ------ | ------ | ------ | ------ | ------ |
| Trans   | 0.388  | 0.309  | 0.027  | 0.103  | 0.050  |
| KBRD    | 0.408  | 0.324  | 0.055  | 0.162  | 0.108  |
| KGSF    | 0.416  | 0.330  | 0.062  | 0.203  | 0.123  |
| RevCore | 0.409  | 0.323  | 0.057  | 0.195  | 0.112  |
| CRFR    | 0.421  | 0.334  | 0.064  | 0.208  | 0.135  |
| C²-CRS  | 0.417  | 0.331  | 0.065  | 0.209  | 0.145  |
| ACRG    | 0.422  | 0.326  | 0.054  | 0.161  | 0.270  |
| DICR    | 0.478* | 0.366* | 0.059  | 0.183  | 0.319* |
| w/o PI  | 0.467  | 0.357  | 0.057  | 0.177  | 0.308  |
| w/o KI  | 0.464  | 0.357  | 0.055  | 0.171  | 0.293  |
| w/o SI  | 0.466  | 0.358  | 0.057  | 0.175  | 0.302  |
| w/o FG  | 0.458  | 0.352  | 0.056  | 0.168  | 0.305  |
| w/o BI  | 0.439  | 0.338  | 0.058  | 0.174  | 0.288  |

Table 2: Overall evaluation on conversation. w/o and * have the same meaning as in Table 1.

# 5.3 Evaluation on Conversation

Overall Evaluation To evaluate the overall performance of the conversation, we use BLEU-1/2 and Distinct-1/2 (Dist-1/2) to evaluate the quality and diversity of the generated responses, and an F1 score to measure how well the responses contain the golden knowledge. As shown in Table 2, DICR significantly outperforms all baselines on most metrics. DICR achieves $13.3\%$ and $9.6\%$ improvements on Bleu-1/2 over the best baselines, which supports the effectiveness of our method, i.e., aligning the recommendation reasoning and conversation processes. DICR also achieves the best result on F1, demonstrating that explicit recommendation paths (i.e., DICR, ACRG) are superior to implicit embedding semantics (i.e., the baselines except ACRG) in guiding the generation of knowledge-rich responses.

Hit and Explainability Accurate recommendation with coherent explanations is one of our main contributions. We propose "Hit" to measure the recommendation success rate in conversation. Hit is the hit rate at which recommended items in the
golden response are included in the generated response. The explainability of a response is evaluated through logically linked entity pairs, which are necessary for a coherent explanation. Specifically, "Inter" counts the entity links between context and response, evaluating contextually coherent explanation across context and response. "Inner" counts the entity links within the response, evaluating self-consistent explanation within the response. "G" counts the entity links that can be matched in the KG, evaluating explanations against global KG knowledge. "P" counts the entity links that can be matched in the recommendation paths in the KG, evaluating how well the recommendation paths support explanation generation. Finally, we have four combined indicators, "G-Inter, G-Inner, P-Inter, P-Inner"; e.g., "G-Inter" evaluates the coherence of the explanation according to KG knowledge.

| Models  | Hit    | G-Inter | G-Inner | P-Inter | P-Inner |
| ------- | ------ | ------- | ------- | ------- | ------- |
| KBRD    | 0.250  | 0.639   | 0.194   | -       | -       |
| KGSF    | 0.261  | 0.662   | 0.201   | -       | -       |
| RevCore | 0.245  | 0.670   | 0.222   | -       | -       |
| CRFR    | 0.293  | 0.691   | 0.247   | 0.485   | 0.134   |
| C²-CRS  | 0.310  | 0.695   | 0.319   | -       | -       |
| ACRG    | 0.318  | 0.548   | 0.288   | 0.527   | 0.279   |
| DICR    | 0.426* | 0.720*  | 0.348*  | 0.689*  | 0.330*  |
| w/o PI  | 0.405  | 0.685   | 0.320   | 0.657   | 0.309   |
| w/o KI  | 0.386  | 0.680   | 0.342   | 0.651   | 0.327   |
| w/o SI  | 0.403  | 0.687   | 0.327   | 0.663   | 0.319   |
| w/o FG  | 0.385  | 0.667   | 0.263   | 0.641   | 0.256   |
| w/o BI  | 0.354  | 0.592   | 0.298   | 0.570   | 0.287   |

Table 3: Evaluation on hit and explainability. w/o and * have the same meaning as in Table 1.

In Table 3, DICR outperforms all baselines and obtains a significant improvement on Hit and explainability. First, DICR achieves a $34\%$ improvement on Hit compared to the best ACRG, which verifies the effectiveness of the dual imitation mechanism in aligning the behavior of the recommendation reasoning and conversation processes. Second, DICR obtains $3.6\%$ and $9.1\%$ gains on G-Inter and G-Inner compared to the best $\mathrm{C}^2$-CRS, which shows that DICR prefers to generate logically coherent explanations within responses and across context and response. Third, DICR improves by $30.7\%$ and $18.3\%$ on P-Inter and P-Inner compared to the best ACRG, which indicates that the conversation side of the dual imitation (i.e., KI and SI) can effectively identify and integrate recommendation paths for response generation.

Human Evaluation In the human evaluation, we randomly sampled 200 contexts. Each context is associated with eight responses from the eight comparison
models, respectively. Six well-educated annotators evaluate each response with four scores: fluency, coherence, informativeness, and explainability. Fluency and coherence evaluate the language quality of the responses. Informativeness evaluates whether a response incorporates rich knowledge. Explainability evaluates whether a response explains the reason for the recommended item. The scores are in $\{0, 1, 2\}$. The model names are masked during the evaluation for a fair comparison. Fleiss' kappa (Fleiss, 1971) measures the agreement among the annotators. As shown in Table 4, the superiority of DICR on all indicators supports the observations from the automatic evaluations.

| Models  | Flu.   | Coher. | Inform. | Explain. |
| ------- | ------ | ------ | ------- | -------- |
| Trans   | 1.644  | 1.302  | 1.002   | 0.750    |
| KBRD    | 1.688  | 1.328  | 1.235   | 0.875    |
| KGSF    | 1.665  | 1.351  | 1.241   | 0.895    |
| RevCore | 1.692  | 1.383  | 1.248   | 0.912    |
| CRFR    | 1.696  | 1.386  | 1.262   | 0.940    |
| C²-CRS  | 1.673  | 1.342  | 1.234   | 0.900    |
| ACRG    | 1.790  | 1.608  | 1.045   | 0.950    |
| DICR    | 1.815* | 1.633* | 1.300*  | 1.058*   |
| kappa   | 0.566  | 0.518  | 0.521   | 0.490    |

Table 4: Human evaluation on conversation. "Flu.", "Coher.", "Inform.", "Explain." denote fluency, coherence, informativeness and explainability. The agreement ratio kappa $\in [0.41, 0.6]$ indicates moderate agreement. * indicates a t-test with p-value $< 0.05$.

Ablation Study In Tables 2 and 3: (1) We separately remove the path imitation, knowledge imitation, and semantic imitation to examine their contribution, namely w/o PI, w/o KI, and w/o SI, respectively. In the results, the path imitation mainly benefits the inner coherence of explainability ("G/P-Inner"), which verifies its designed advantage of indirectly guiding the explanation logic through accurate recommendation paths. The knowledge imitation mainly benefits the recommendation hit ("Hit"), the inter coherence of recommendation ("G/P-Inter") and the distinctness of responses ("Dist-1/2"), which verifies its designed advantage of refining the distribution of recommendation paths for accurate recommendation and of encouraging diverse explanations in responses. The semantic imitation also mainly benefits the inner coherence of explainability and is more important to the inter coherence of recommendation than the path imitation, which verifies its designed advantage of improving the semantics of responses by promoting inner and inter coherence.
(2) We remove the fusion gate, namely w/o FG. The results show that the dynamic information fusion mechanism achieves an impressive enhancement. (3) We remove the bidirectional improvement in training, namely w/o BI. The results indicate that the tight information interaction between recommendation and conversation, with the alignment signals of dual imitation as the bridge, is crucial to the overall performance.

![](images/42b67c767ef4cc40529585cc7e0576277d1c174eafebe7a610e68324f71529d6.jpg)
Figure 3: Cases generated by different models, indicating multi-hop entities and correct/incorrect relations.

# 5.4 Case Study

In Figure 3, two cases handled by the eight models are shown, among which DICR has two advantages. (1) The items recommended by DICR are more accurate and more likely to have explicit multi-hop relations with the items mentioned by the user, consistent with "Hit" and "G/P-Inter" in Table 3; e.g., in Dialog-2, "Higher Ground" in the response and "Tower Heist" in the context share the actor "Nina Arianda." This is evidence of improving recommendation by tracking the user interest shift in conversation, which mainly benefits from the path imitation and the knowledge imitation, as verified by the ablation study in Table 3. (2) DICR naturally states the items' relations as an explanation, consistent with "G/P-Inner" in Table 3 and "Explain." in Table 4; e.g., in Dialog-1, the director "Martin Campbell", the movie "GoldenEye" and the genre "thriller" derive from a recommendation path with coherent relations. This is evidence of improving conversation by involving the recommendation path as an explanation, which mainly benefits from the semantic imitation, as verified in Table 3.

# 6 Conclusions

We propose DICR, which adopts dual imitation to align a CRS's recommendation and conversation
Using recommendation paths and conversational rewards as alignment signals for tight interaction between recommendation and conversation, DICR achieves accurate recommendations and coherent explanations in generated responses. The effectiveness of DICR is verified by designed novel explainability evaluations together with human and existing automatic metrics. + +# Limitations + +We discuss two main limitations of this work which can be further studied in future work. The first one is the reliance of explicit knowledge in knowledge graph. Although using knowledge graph is a common advantage of most current CRS studies, and explicit relations between entities leads to effective and reliable reasoning for recommendation, there are still a large amount of implicit knowledge in unstructured resources which cannot be extracted as explicit triplet, e.g., the multidimensional similarity between entities, but can be further extra supplement to dialog context. + +The second one is the task of next-turn recommendation. As the main contribution of this work, although the modeling of user interest shift significantly improve the performance of making recommendation in next-turn response, the user interest shift modeling can also naturally help us to guide the user interests towards proper recommendation through smooth and persuasive multi-turn conversation with users. To address this limitation, in the future, we will extend the idea to align the KG-based reasoning and conversation process towards + +long-term global goal instead of local target. + +# Ethics Consideration + +All models in this paper are trained on public corpus. The OpenDialKG (Moon et al., 2019) dataset do not contain personal information and unethical language. We also ensure the anonymization of the human evaluation. We believe that this work honors the ethical code of EMNLP. 
# Acknowledgements

This work was supported by the National Natural Science Foundation of China (62272340, 61876128, 61876129, 62276187, 61976154, 61402323) and the State Key Laboratory of Communication Content Cognition (Grant No. A32003).

# References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Richard Bellman. 2013. Dynamic Programming. Courier Corporation, New York, NY.
Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2787-2795.
Jiangxia Cao, Xixun Lin, Shu Guo, Luchen Liu, Tingwen Liu, and Bin Wang. 2021. Bipartite graph embedding via mutual information maximization. In WSDM '21, The Fourteenth ACM International Conference on Web Search and Data Mining, Virtual Event, Israel, March 8-12, 2021, pages 635-643. ACM.
Qibin Chen, Junyang Lin, Yichang Zhang, Ming Ding, Yukuo Cen, Hongxia Yang, and Jie Tang. 2019. Towards knowledge-based recommender dialog system. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1803-1813. Association for Computational Linguistics.
Yang Deng, Yaliang Li, Fei Sun, Bolin Ding, and Wai Lam. 2021. Unified conversational recommendation policy learning via graph-based reinforcement learning.
In SIGIR '21: The 44th International ACM + +SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pages 1431-1441. ACM. +Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378. +Chongming Gao, Wenqiang Lei, Xiangnan He, Maarten de Rijke, and Tat-Seng Chua. 2021. Advances and challenges in conversational recommender systems: A survey. AI Open, 2:100-126. +Shirley Anugrah Hayati, Dongyeop Kang, Qingxiaoyang Zhu, Weiyan Shi, and Zhou Yu. 2020. INSPIRED: toward sociable recommendation dialog systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 8142-8152. Association for Computational Linguistics. +Jonathan Ho and Stefano Ermon. 2016. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 4565-4573. +Dietmar Jannach, Ahtsham Manzoor, Wanling Cai, and Li Chen. 2021. A survey on conversational recommender systems. ACM Comput. Surv., 54(5):105:1-105:36. +Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar. A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746-1751. ACL. +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Ivica Kostric, Krisztian Balog, and Filip Radlinski. 2021. Soliciting user preferences in conversational recommender systems via usage-related questions. 
In RecSys '21: Fifteenth ACM Conference on Recommender Systems, Amsterdam, The Netherlands, 27 September 2021 - 1 October 2021, pages 724-729. ACM.
Wenqiang Lei, Gangyi Zhang, Xiangnan He, Yisong Miao, Xiang Wang, Liang Chen, and Tat-Seng Chua. 2020. Interactive path reasoning on graph for conversational recommendation. In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 2073-2083. ACM.
Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018. Towards deep conversational recommendations. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montreal, Canada, pages 9748-9758.
Shuokai Li, Ruobing Xie, Yongchun Zhu, Xiang Ao, Fuzhen Zhuang, and Qing He. 2022. User-centric conversational recommendation with multi-aspect user modeling. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. Association for Computing Machinery.
Rongzhong Lian, Min Xie, Fan Wang, Jinhua Peng, and Hua Wu. 2019. Learning to select knowledge for response generation in dialog systems. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5081-5087. ijcai.org.
Zujie Liang, Huang Hu, Can Xu, Jian Miao, Yingying He, Yining Chen, Xiubo Geng, Fan Liang, and Daxin Jiang. 2021. Learning neural templates for recommender dialogue system. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 7821-7833. Association for Computational Linguistics.
Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, and Ting Liu. 2020. Towards conversational recommendation over multi-type dialogs.
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 1036-1049. Association for Computational Linguistics. +Yu Lu, Junwei Bao, Yan Song, Zichen Ma, Shuguang Cui, Youzheng Wu, and Xiaodong He. 2021. Revcore: Review-augmented conversational recommendation. In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 1161-1173. Association for Computational Linguistics. +Wenchang Ma, Ryuichi Takanobu, and Minlie Huang. 2021. Cr-walker: Tree-structured graph reasoning and dialog acts for conversational recommendation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 1839-1851. Association for Computational Linguistics. +Seungwhan Moon, Pararth Shah, Anuj Kumar, and Rajen Subba. 2019. Opendialkg: Explainable conversational reasoning with attention-based walks over knowledge graphs. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 845-854. Association for Computational Linguistics. + +Xuhui Ren, Hongzhi Yin, Tong Chen, Hao Wang, Zi Huang, and Kai Zheng. 2021. Learning to ask appropriate questions in conversational recommendation. In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pages 808-817. ACM. +Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1073-1083. Association for Computational Linguistics. 
+Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction. MIT press. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008. +Xiang Wang, Dingxian Wang, Canran Xu, Xiangnan He, Yixin Cao, and Tat-Seng Chua. 2019. Explainable reasoning over knowledge graphs for recommendation. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 5329-5336. AAAI Press. +Yikun Xian, Zuohui Fu, S. Muthukrishnan, Gerard de Melo, and Yongfeng Zhang. 2019. Reinforcement knowledge graph reasoning for explainable recommendation. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019, Paris, France, July 21-25, 2019, pages 285-294. ACM. +Zhitong Yang, Bo Wang, Jinfeng Zhou, Yue Tan, Dongming Zhao, Kun Huang, Ruifang He, and Yuexian Hou. 2022. TopKG: Target-oriented dialog via global planning on knowledge graph. In Proceedings of the 29th International Conference on Computational Linguistics, pages 745-755, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. +Jun Zhang, Yan Yang, Chencai Chen, Liang He, and Zhou Yu. 2021. KERS: A knowledge-enhanced framework for recommendation dialog systems with multiple subgoals. In Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pages 1092-1101. Association for Computational Linguistics. 
+Tong Zhang, Yong Liu, Boyang Li, Peixiang Zhong, Chen Zhang, Hao Wang, and Chunyan Miao. 2022a. + +Toward knowledge-enriched conversational recommendation systems. In Proceedings of the 4th Workshop on NLP for Conversational AI, ConvAI@ACL 2022, Dublin, Ireland, May 27, 2022, pages 212-217. Association for Computational Linguistics. +Yiming Zhang, Lingfei Wu, Qi Shen, Yitong Pang, Zhihua Wei, Fangli Xu, Bo Long, and Jian Pei. 2022b. Multiple choice questions based multi-interest policy learning for conversational recommendation. In WWW '22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25 - 29, 2022, pages 2153-2162. ACM. +Kangzhi Zhao, Xiting Wang, Yuren Zhang, Li Zhao, Zheng Liu, Chunxiao Xing, and Xing Xie. 2020. Leveraging demonstrations for reinforcement recommendation reasoning over knowledge graphs. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 239-248. ACM. +Jinfeng Zhou, Bo Wang, Ruifang He, and Yuexian Hou. 2021. CRFR: improving conversational recommender systems via flexible fragments reasoning on knowledge graphs. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 4324-4334. Association for Computational Linguistics. +Jinfeng Zhou, Bo Wang, Zhitong Yang, Dongming Zhao, Kun Huang, Ruifang He, and Yuexian Hou. 2022a. CR-GIS: Improving conversational recommendation via goal-aware interest sequence modeling. In Proceedings of the 29th International Conference on Computational Linguistics, pages 400-411, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. +Kun Zhou, Wayne Xin Zhao, Shuqing Bian, Yuanhang Zhou, Ji-Rong Wen, and Jingsong Yu. 2020. Improving conversational recommender systems via knowledge graph based semantic fusion. 
In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 1006-1014. ACM.
Yuanhang Zhou, Kun Zhou, Wayne Xin Zhao, Cheng Wang, Peng Jiang, and He Hu. 2022b. C2-CRS: coarse-to-fine contrastive learning for conversational recommender system. CoRR, abs/2201.02732.

![](images/2e26d425986791ac1c0727dcfe46f865f4ec62e1378addbaec384348d69af6c1.jpg)
(a) Hit

![](images/cd2746e31f559fa2b416e31aaa32241a399d3c30b0f5ccb912c49f6be01072ec.jpg)
(b) G-Inter/Inner

![](images/405b4b85bb0e117d8a71d49035ffcf38682267ca27d07c8759a305dcaa6ac5d.jpg)
(c) P-Inter/Inner

Figure 4: The influence of the number of recommendation paths on Hit and Explainability. As the number of recommendation paths increases, DICR improves on the Hit and Explainability metrics and outperforms the best baseline in most cases.

| | Statistic | Value |
| --- | --- | --- |
| Corpus Info. | #Domain | Movie, Book |
| | #Dialogues | 15,673 |
| | #Turns | 91,209 |
| | #Split Ratio | 7:1.5:1.5 |
| KG Info. | #Entities | 100,813 |
| | #Relations | 1,358 |
| | #Triplets | 1,190,658 |
Table 5: Statistics of our datasets after preprocessing.

# A Dataset

The statistics of OpenDialKG after preprocessing are given in Table 5.

We did not employ other CRS datasets. Compared with OpenDialKG, dialogs in other datasets such as REDIAL (Li et al., 2018) mention the recommended items without rich related information and tend to mention only movie names, rather than discussing the movie preference in depth, which is what this paper treats as the recommendation explanation. As reported in CRFR (Zhou et al., 2021), these advantages of OpenDialKG improve the performance of CRFR and of the CRS baselines compared in our experiments.

# B Analysis of the Number of Recommendation Paths

We analyze the influence of the number of recommendation paths on Hit and explainability; Figure 4 presents the results.

First, as shown in Figure 4(a), as the number of recommendation paths used as alignment signals for the recommendation side of the dual imitation increases, the Hit scores improve slightly, with some fluctuation. This indicates that the conversation side of the dual imitation can effectively identify the gold recommendation paths and prompt the conversation process to align with the recommendation reasoning.

Second, in Figures 4(b) and 4(c), G-Inter/Inner and P-Inter/Inner both improve distinctly as the number of paths increases. This improvement is attributed to knowledge imitation and semantic imitation enabling DICR to discern and integrate the coherent knowledge in the recommendation paths as the recommendation explanation in the response. This aligns the recommendation reasoning with explanation generation, helping the model refine the discerned knowledge and present it in the generated response.
# A Localized Geometric Method to Match Knowledge in Low-dimensional Hyperbolic Space

Bo Hui

Auburn University

bohui@auburn.edu

Tian Xia

Auburn University

tianxia@auburn.edu

Wei-Shinn Ku

Auburn University

weishinn@auburn.edu

# Abstract

Matching equivalent entities across knowledge graphs is a pivotal step for knowledge fusion. Previous approaches usually study the problem in Euclidean space.
However, recent works have shown that hyperbolic space has a higher capacity than Euclidean space and that hyperbolic embeddings can represent the hierarchical structure of a knowledge graph. In this paper, we propose a localized geometric method to find equivalent entities in hyperbolic space. Specifically, we use a hyperbolic neural network to encode the lingual information of entities and the structure of both knowledge graphs into a low-dimensional hyperbolic space. To address the asymmetry of structure across different KGs and the localized nature of relations, we learn an instance-specific geometric mapping function based on rotation to match entity pairs. A contrastive loss function is used to train the model. Experiments verify the power of low-dimensional hyperbolic space for entity matching and show that our method outperforms the state of the art by a large margin.

# 1 Introduction

A knowledge graph (KG) is a knowledge base that uses a graph-structured topology to integrate entities, relations, and metadata. Real-world KGs such as DBpedia, Wikidata, and Yago benefit a variety of downstream applications such as question answering (Cui et al., 2017) and fact checking (Huynh and Papotti, 2019). In general, a KG is constructed from a single knowledge base or built in a single language. Thus it is impractical to reach full coverage of the domain (Zhao et al., 2020). To increase the completeness of the knowledge base, a conventional approach is the fusion of multiple KGs. One pivotal step for fusion is to align equivalent entities across different KGs.

Conventional entity alignment approaches mainly compare symbolic features of entities (Lacoste-Julien et al., 2013) or reason about correlations by ontology matching (Jiménez-Ruiz and Grau, 2011). With the prosperity of node embedding (Grover and Leskovec, 2016), recent works favor learning entity embeddings for alignment and comparing entities using embedding distance metrics.

Existing embedding methods for entity alignment can be classified into three types: attribute-based (Sun et al., 2017), relation-based (Chen et al., 2017; Mao et al., 2020), and graph-based (Wang et al., 2018; Sun et al., 2020b).

However, these embedding-based works study the problem in Euclidean space, where the embeddings are Euclidean vectors. Recent research has shown that Euclidean space does not provide the most powerful geometric representations for complex data that exhibit a highly non-Euclidean latent anatomy (Bronstein et al., 2017; Hui and Ku, 2022). To tackle this challenge, a variety of remarkable embedding methods have been developed to represent such data in hyperbolic space. The distinctive features of hyperbolic spaces enable us to embed hierarchical data while preserving the latent hierarchical structure (Nickel and Kiela, 2017).

In this paper, we propose to solve the entity alignment problem in hyperbolic space. Since entity alignment is a downstream task for embedding, how to use the hyperbolic embeddings to match entities is a challenge. Furthermore, the operations in neural networks such as vector addition, matrix-vector multiplication, and the vector inner product are defined in Euclidean space; existing neural network models are therefore no longer applicable to hyperbolic embeddings. To address these challenges, we use a hyperbolic version of neural networks. Specifically, we utilize a hyperbolic graph neural network model to learn low-dimensional hyperbolic embeddings for the entities of the two KGs respectively. Two mapping functions are used to implement the initialization, attention-based aggregation, and dimensionality reduction in the model. The pre-trained semantic embeddings are projected into the hyperbolic space as the entity features.

![](images/49b6a423c0cad90eb35e305d394480d0e4dc83a0b5bf338e1379c06be9c71c72.jpg)
Figure 1: KGs in a 2-dimensional Poincaré disk.
We consider the output of the hyperbolic neural network model as the final low-dimensional hyperbolic embeddings for entities. The next task is then to map embeddings between the two KGs.

Existing works use a unified global mapping function to match entities. However, the asymmetry of structure across different KGs makes it difficult to learn a unified relationship for all pairs. The reason behind the varied relations is the heterogeneous nature of the data sources for KGs. For example, there are 30,291 inner-relations between 15,000 entities in the DBpedia KG, but only 26,638 inner-relations between these entities in the Yago KG. As a result, the structure of one DBpedia sub-graph may be the same as its counterpart in the Yago KG, while the structure of another sub-graph may differ from its counterpart. To address the localized relations between entities, we propose an instance-based geometric mapping function in which local parameters are learned from each entity's own embedding. In hyperbolic space, more generic or ambiguous nodes (e.g., the root of a KG) tend to be placed closer to the origin, while more specific nodes (those deeper in the hierarchy) are placed towards the boundary. Figure 1 shows the embeddings of two KGs in a 2-dimensional Poincaré disk. We can see that nodes near the root lie closer to the origin, while nodes deeper in the hierarchy are placed towards the boundary. This motivates us to design a novel geometric mapping function based on rotation to match two entities across KGs. Ideally, after the rotation, the entity will overlap with its equivalent entity in the hyperbolic space. Instead of learning a unified rotation function for all entities, we use instance-based rotation functions. As shown in Figure 1, after rotating the embedding of $e_1$ through $\theta_1$, $e_1$ will overlap with its corresponding entity $e_1'$. However, the angle between the embeddings of $e_2$ and $e_2'$ is $\theta_2$, which is
However, the angle between embeddings of $e_2$ and $e_2'$ is $\theta_2$ , which is + +totally different from $\theta_{1}$ + +To train the model, we minimize the hyperbolic distance after mapping between a pair of aligned entities and push negative samples away from the target one. For each entity, we find the aligned entity by searching for the nearest neighbor in terms of hyperbolic distance. Our novelty over existing works can be summarized as: + +- We solve the entity alignment problem in hyperbolic space instead of Euclidean space to capture hierarchical structures of KGs. +- We propose a hyperbolic geometric mapping function to address the non-linear distance ratio with respect to radius in hyperbolic space. +Instead of using a unified global mapping function for all entities, we learn the localized parameters for each entity to address the asymmetry of structure on different KGs. + +# 2 Related Work + +The majority of existing entity matching methods rely on KG embeddings (Sun et al., 2020c). According to the KG embeddings approaches, these models can be roughly categorized into three groups: relation-based, attributes-based, and graph-based models. Relation-based models mainly employ the translational methods (Bordes et al., 2013) to learn the embedding based on relationship triples. IPTransE (Zhu et al., 2017) is an entity alignment model based on translation. It encodes both entities and relations into a unified low-dimensional semantic space. MTransE (Chen et al., 2017) encodes entities and relations of each KG in a separated embedding space. BootEA (Sun et al., 2018) leverages the bootstrapping idea to iteratively label likely alignment. RSNs (Guo et al., 2019) feeds the relational paths into recurrent neural networks to learn embeddings. To increase the robustness, MultiKE (Zhang et al., 2019) unifies multiple views of entities and embeds entities with several combination strategies. 
Attribute-based models consider the correlations among the attributes of entities. For example, JAPE (Sun et al., 2017) assumes that similar entities should have similar correlated attributes. AttrE (Trisedya et al., 2019) exploits large numbers of attribute triples and models the various types of attribute triples to generate attribute embeddings; it then shifts the embeddings of the two KGs into the same space by computing attribute similarity.

With the prosperity of graph neural networks (Kipf and Welling, 2017; Hui et al., 2020; Jiang et al., 2022), many works propose to utilize graph convolutional networks to model the structure of a KG. GCNAlign (Wang et al., 2018) trains GCNs to embed the entities of each KG into a unified vector space. RDGCN (Wu et al., 2019) further incorporates relation information in the KG and captures neighboring structures via a dual graph convolutional network. AliNet (Sun et al., 2020b) aims to mitigate the non-isomorphism of neighborhood structures in an end-to-end manner and controls the aggregation of both direct and distant neighborhood information using a gating mechanism. RREA (Mao et al., 2020) abstracts existing entity alignment methods into a unified framework and derives two key criteria for an ideal transformation operation. RNM (Zhu et al., 2021) is a relation-aware neighborhood matching model. It utilizes neighborhood matching to enhance entity alignment and uses an iterative framework to leverage the positive samples and the relation alignment in a semi-supervised manner. Dual-AMN (Mao et al., 2021) uses an encoder to model both intra-graph and cross-graph information. EASY (Ge et al., 2021) removes the labor-intensive pre-processing by fully discovering the name information provided by the entities themselves and jointly fuses the features captured by the names of entities and the structural information of the graph. ActiveEA (Liu et al., 2021) introduces active learning to reduce the cost of labeling and annotation.
Temporal KGs have also been studied to match time-aware entities (Xu et al., 2021). HMEA (Guo et al., 2021) utilizes visual information to learn image embeddings; it combines the structural and visual representations in hyperbolic space to predict alignment results.

HyperKA (Sun et al., 2020a) also aligns entities across KGs in hyperbolic space. However, HyperKA directly aggregates neighborhood information in hyperbolic space and fails to leverage the power of hyperbolic neural networks (e.g., dimensionality reduction and the attention mechanism). Furthermore, it uses a unified linear transformation function, whereas we use a localized geometric method; HyperKA thus ignores the non-linear distance ratio with respect to the radius, the isometries of hyperbolic geometry, and the locality of the mapping. Lastly, it randomly associates each entity with a vector, whereas we associate each entity with a pretrained semantic embedding. Besides entity alignment, tensor completion (Harshman et al., 1970; Hui et al., 2022) is another method to increase the completeness of a knowledge base.

# 3 Preliminaries

# 3.1 Problem Formulation

We use $G = (E, R, T)$ to represent a KG, where $E$ and $R$ are the sets of entities and relations in the KG. Let $T$ be the set of triples, each of the form $(e_h, r, e_t)$, consisting of the head entity $e_h \in E$, the tail entity $e_t \in E$, and the relation $r$ between $e_h$ and $e_t$. In the entity alignment problem, we are given two KGs, $G_1 = (E_1, R_1, T_1)$ and $G_2 = (E_2, R_2, T_2)$. The set of known aligned entity pairs across $G_1$ and $G_2$ is defined as $S = \{(e_1, e_2) \mid e_1 \in E_1, e_2 \in E_2\}$, where $e_1$ and $e_2$ are equivalent to each other. Our goal is to find more 1-to-1 alignments across the two KGs $G_1$ and $G_2$.

# 3.2 Hyperbolic Geometry

Here we briefly present some basic knowledge of hyperbolic geometry. Hyperbolic space is a complete, simply connected Riemannian manifold with constant negative curvature.
There are five isometric models of hyperbolic space: the half-space, Poincaré ball, hemisphere, Klein, and hyperboloid models. In this paper, we use the $d$-dimensional Poincaré ball model, which is the most popular in machine learning:

$$
B ^ {d, c} = \left\{\mathbf {x} \in R ^ {d}: \left\| \mathbf {x} \right\| ^ {2} < \frac {1}{c} \right\}, \tag {1}
$$

where $-c$ $(c > 0)$ is the negative curvature. Different from the Euclidean addition of two vectors, the Möbius addition of $\mathbf{x}$ and $\mathbf{y}$ in the hyperbolic space $B^{d,c}$ is defined as:

$$
\mathbf {x} \oplus_ {c} \mathbf {y} = \frac {(1 + 2 c \langle \mathbf {x} , \mathbf {y} \rangle + c \| \mathbf {y} \| ^ {2}) \mathbf {x} + (1 - c \| \mathbf {x} \| ^ {2}) \mathbf {y}}{1 + 2 c \langle \mathbf {x} , \mathbf {y} \rangle + c ^ {2} \| \mathbf {x} \| ^ {2} \| \mathbf {y} \| ^ {2}}. \tag {2}
$$

Then the hyperbolic distance between $\mathbf{x}$ and $\mathbf{y}$ in $B^{d,c}$ on the manifold is given by:

$$
d _ {c} (\mathbf {x}, \mathbf {y}) = (1 / \sqrt {c}) \operatorname {arcosh} (- c \langle \mathbf {x}, \mathbf {y} \rangle_ {\mathcal {M}}), \tag {3}
$$

where $\langle \cdot ,\cdot \rangle_{\mathcal{M}}$ denotes the Minkowski inner product.

# 3.3 Hyperbolic KG Embedding

A distinctive property of hyperbolic space is that the circle circumference and disc area grow exponentially with respect to the radius, which allows hyperbolic embeddings to represent hierarchical structures.
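As an illustration (not part of the paper's own code), Möbius addition and the hyperbolic distance can be sketched in NumPy. The distance below uses the Möbius-addition form of the Poincaré-ball metric from Ganea et al. (2018), which is a common equivalent of the arcosh formula above:

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    """Möbius addition x ⊕_c y in the Poincaré ball (Eq. 2)."""
    xy, x2, y2 = x @ y, x @ x, y @ y
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c**2 * x2 * y2
    return num / den

def hyp_dist(x, y, c=1.0):
    """Poincaré-ball distance, written via Möbius addition."""
    return (2 / np.sqrt(c)) * np.arctanh(
        np.sqrt(c) * np.linalg.norm(mobius_add(-x, y, c)))
```

For example, `hyp_dist(x, np.zeros(2))` reduces to `2 * np.arctanh(np.linalg.norm(x))` when `c = 1`, so points near the ball's boundary are exponentially far from the origin, which is exactly the property exploited in Section 3.3.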
![](images/f56f86165a9337106089858b815acd9af32450b7ba4375674218fd25f1bbcd70.jpg)
(a) Distance ratio

![](images/9d70f64c6295b099f79606f4c25073ceb719dcd6804acd8a9ca0c239fd88c02a.jpg)
(b) A toy example of knowledge graph

![](images/897f4044b6e923119696e78fc1316dbf1acf3ffa9c692a48f19038a7f7b36c7f.jpg)
(c) Knowledge graph in Poincaré disk

Figure 2: Hyperbolic space

Specifically, given three points, the origin $\mathbf{o}$ and two points $\mathbf{x}$, $\mathbf{y}$ with $\|\mathbf{x}\| = \|\mathbf{y}\| = r$ $(\mathbf{x} \neq \mathbf{y})$, we depict the hyperbolic distance ratio $\frac{d_c(\mathbf{x},\mathbf{y})}{d_c(\mathbf{x},\mathbf{o}) + d_c(\mathbf{o},\mathbf{y})}$ in Figure 2(a). Compared with Euclidean space, where the distance ratio $\frac{d_e(\mathbf{x},\mathbf{y})}{d_e(\mathbf{x},\mathbf{o}) + d_e(\mathbf{o},\mathbf{y})}$ ($d_{e}(\cdot)$ denotes the Euclidean distance) is constant, the hyperbolic distance ratio approaches 1 exponentially as $r \to 1$. Equivalently, the shortest path from $\mathbf{x}$ to $\mathbf{y}$ is almost the same as the path through the origin as $r \to 1$. This is analogous to the property of a tree data structure, in which the shortest path between two sibling nodes is the path through their parent (Sala et al., 2018).

KGs often exhibit hierarchical structures, and the number of nodes grows exponentially as the level increases. Figure 2(b) shows a tree-like KG with a branching factor of 3. We can see that the number of nodes at each level grows exponentially with their distance to the root of the tree. Due to this property, hyperbolic embeddings offer excellent quality for KG representation. As an example, we embed the toy KG into a 2-dimensional Poincaré disk (Figure 2(c)). The hyperbolic embeddings enable all connected entities in the KG to be spaced equally far apart in 2-dimensional hyperbolic space, and the hierarchical structure is preserved.

# 4 Methodology

We first associate each entity $e$ with a vector as its feature.
Specifically, we follow RNM (Zhu et al., 2021) and initialize the entity vector with the pre-trained semantic embedding $\mathbf{x}^E$, which represents the lingual information of entity names. However, existing pre-trained word embeddings are learned by Euclidean neural networks in Euclidean space. To address this problem, we map the Euclidean features into the hyperboloid manifold by:

$$
\mathbf {x} ^ {H} = e x p _ {\mathbf {o}} ^ {c} \mathbf {x} ^ {E}, \tag {4}
$$

where $\mathbf{o} = (0,0,\dots ,0) \in R^{d}$ is the origin. We consider $\mathbf{x}^E$ as a vector in the tangent space with $\mathbf{o}$ as the reference point. The exponential map function $exp_{\mathbf{v}}^{c}\mathbf{x}$ (Ganea et al., 2018) projects a tangent vector $\mathbf{x}^E$ into the hyperbolic space at $\mathbf{v}$:

$$
\exp_ {\mathbf {v}} ^ {c} \mathbf {x} = \cosh (\sqrt {c} \| \mathbf {x} \|) \mathbf {v} + \frac {1}{\sqrt {c}} \sinh (\sqrt {c} \| \mathbf {x} \|) \frac {\mathbf {x}}{\| \mathbf {x} \|}. \tag {5}
$$

To utilize the structure of the KGs for entity alignment, we introduce the aggregation operation in hyperbolic space. On each knowledge graph, we aggregate the neighbors' vectors with that of the center entity by:

$$
\mathbf {z} _ {i} ^ {(k + 1)} = \mathbf {h} _ {i} ^ {(k)} \oplus_ {c} \mathbf {n} _ {i} ^ {(k)}, \quad k = 0, 1, \dots , K - 1, \tag {6}
$$

where $\mathbf{h}_i^{(0)} = \mathbf{x}_i^H$ is the input feature and $\mathbf{n}_{i}^{(k)}$ is the aggregation of the neighbors' vectors according to their importance to the center entity. We use $K$ to denote the depth. The standard attention mechanism utilizes a neural network layer to learn a weight as the importance of each neighbor. However, the linear function of the neural network layer is defined in Euclidean space.
To address this problem, we use the logarithmic map (Ganea et al., 2018) to project a hyperbolic vector into the tangent space at a point $\mathbf{u}$ :

$$
\log_{\mathbf{u}}^{c}(\mathbf{y}) = d_{c}(\mathbf{u},\mathbf{y})\frac{\mathbf{y} + c\langle \mathbf{u},\mathbf{y}\rangle_{\mathcal{M}}\mathbf{u}}{\|\mathbf{y} + c\langle \mathbf{u},\mathbf{y}\rangle_{\mathcal{M}}\mathbf{u}\|}. \tag{7}
$$

Then we aggregate the neighbors' vectors in the tangent space and map the result back to hyperbolic space:

$$
\mathbf{n}_{i}^{(k)} = \exp_{\mathbf{h}_{i}^{(k)}}^{c}\Big(\sum_{j\in \mathcal{N}(i)}\alpha_{i,j}\log_{\mathbf{h}_{i}^{(k)}}^{c}\big(\mathbf{h}_{j}^{(k)}\big)\Big), \tag{8}
$$

where $\mathcal{N}(i)$ contains all neighbors of entity $i$ and we treat the KG as an undirected graph. The importance $\alpha_{i,j}$ of neighbor $j$ is learned from:

$$
\alpha_{i,j} = \underset{j\in \mathcal{N}(i)}{\operatorname{Softmax}}\Big(\mathbf{Q}^{(k)}\cdot \operatorname{CONC}\big(\log_{\mathbf{o}}^{c}\mathbf{h}_{i}^{(k)}, \log_{\mathbf{o}}^{c}\mathbf{h}_{j}^{(k)}\big) + \mathbf{q}^{(k)}\Big), \tag{9}
$$

where $\mathrm{CONC}(\cdot ,\cdot)$ denotes concatenation and the Softmax function normalizes the weights over the neighborhood. The attention mechanism emphasizes the important neighbors.

To further reduce the dimension of the hyperbolic vectors, we feed $\mathbf{z}_i^{(k + 1)}$ into a hyperbolic linear layer:

$$
\mathbf{h}_{i}^{(k+1)} = \exp_{\mathbf{o}}^{c}\left(\mathbf{W}^{(k)}\log_{\mathbf{o}}^{c}\left(\mathbf{z}_{i}^{(k+1)}\right)\right) \oplus_{c} \mathbf{b}^{(k)}, \tag{10}
$$

where both $\mathbf{W}^{(k)}$ and $\mathbf{b}^{(k)}$ are learnable parameters.
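
As a concrete illustration of Equations (5)-(9), the snippet below runs the exponential/logarithmic maps and the tangent-space attention aggregation numerically. It is a minimal sketch, not the paper's implementation: it uses the Poincaré-ball forms of the maps at the origin (the paper works on the hyperboloid and also uses tangent spaces at $\mathbf{h}_i^{(k)}$), and the attention parameters `Q` and `q` are random stand-ins for the learned weights:

```python
import numpy as np

def exp0(v, c=1.0):
    # Exponential map at the origin of the Poincare ball with curvature -c
    n = np.linalg.norm(v)
    return np.tanh(np.sqrt(c) * n) * v / (np.sqrt(c) * n)

def log0(y, c=1.0):
    # Logarithmic map at the origin: the inverse of exp0
    n = np.linalg.norm(y)
    return np.arctanh(np.sqrt(c) * n) * y / (np.sqrt(c) * n)

rng = np.random.default_rng(0)
c = 1.0
h_i = exp0(0.1 * rng.normal(size=4), c)                       # center entity
h_js = [exp0(0.1 * rng.normal(size=4), c) for _ in range(3)]  # its neighbors

# Attention scores (Eq. 9): dot product with the concatenated log-mapped vectors
Q, q = rng.normal(size=8), rng.normal()                       # stand-in parameters
scores = np.array([Q @ np.concatenate([log0(h_i, c), log0(h_j, c)]) + q
                   for h_j in h_js])
alpha = np.exp(scores) / np.exp(scores).sum()                 # softmax over neighbors

# Aggregate in the tangent space, then map back to the manifold (Eq. 8)
n_i = exp0(sum(a * log0(h_j, c) for a, h_j in zip(alpha, h_js)), c)
print(np.linalg.norm(n_i) < 1.0)  # the result stays inside the unit ball
```

Since `log0` and `exp0` are mutual inverses at the origin, `log0(exp0(v, c), c)` recovers `v`, which is a convenient sanity check when experimenting with other curvature values.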
By iteratively executing Equations (6) and (10) $K$ times, we obtain a low-dimensional hyperbolic vector for each entity. We remark that both Equations (6) and (10) are crucial for entity matching: Equation (6) embeds the structural information of the KG and allows us to learn smooth hyperbolic embeddings, while Equation (10) further transforms the hidden state and reduces the dimension of the hyperbolic embeddings, which enables us to represent rich information with low-dimensional vectors.

Find Equivalent Entities. So far we have described how to learn hyperbolic embeddings for a single KG. Rather than optimizing the embeddings of each KG in isolation, we match the two KGs in the same hyperbolic space to fine-tune the embeddings, and we learn a mapping function to find the equivalent pairs of entities.

Existing works either use a single linear transformation function for all entities or directly force two embeddings to be close to each other. However, these works ignore the asymmetry of structure across KGs and the localized nature of relations. Intuitively, the relation between two equivalent entities may vary from one pair to another. For example, the equivalent entity of "New Orleans" in the French Wikipedia KG is "La Nouvelle-Orléans", whereas the corresponding entity for "Times Square" in French is still "Times Square". These two relations are totally different: the first pair is related by translation, while the second pair of entities are identical to each other. As another example, there are 30,291 inner-relations between 15,000 entities in the DBpedia KG, but only 26,638 inner-relations between these entities in the Yago KG. As a result, the structure of one DBpedia sub-graph may be the same as its counterpart in the Yago KG, while the structure of another sub-graph may be totally different from its counterpart. This asymmetry of relations across KGs makes it difficult to learn a unified relationship for all pairs.

In this paper, we propose a localized mapping function. Specifically, for each entity, we learn the parameters of the mapping function from its hyperbolic embedding. Since KGs often exhibit hierarchies, the root node and the nodes at low levels of the KG tend to be located near the origin of the hyperbolic space. Considering this property, we design a parameterized geometric mapping function whose parameters are computed from the hyperbolic embedding.

Let $\mathbf{H}^1$ and $\mathbf{H}^2$ be the embeddings of entities after $K$ iterations on $G_{1}$ and $G_{2}$ , respectively. Suppose we have a pair of equivalent entities across $G_{1}$ and $G_{2}$ : $(e_i,e_j)$ with $e_i\in E_1$ and $e_j\in E_2$ . Our instance-specific mapping function is a rotation:

$$
f\left(\mathbf{H}_{i}^{1}\right) = Rot\left(\theta_{i}\right)\mathbf{H}_{i}^{1}, \tag{11}
$$

where $\mathbf{H}_i^1$ is the hyperbolic embedding of entity $e_i$ on $G_{1}$ and $Rot(\theta_i)$ is a block-diagonal matrix built from the $2\times 2$ rotation blocks commonly used in numerical linear algebra:

$$
Rot(\theta_{i}) = \operatorname{diag}\big(G(\theta_{i,1}),\dots ,G(\theta_{i,\frac{d}{2}})\big), \qquad G(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}. \tag{12}
$$

The dimension $d$ is an even number.
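
Because $Rot(\theta_i)$ is block-diagonal, applying it never requires materializing the full $d\times d$ matrix; each $2\times 2$ block acts on one coordinate pair. A minimal sketch (the angles below are random placeholders, not the learned $\theta_i$):

```python
import numpy as np

def rotate(x, thetas):
    # Apply the block-diagonal rotation Rot(theta) of Eq. (12): the b-th
    # 2x2 block G(theta_b) rotates the coordinate pair (x[2b], x[2b+1]).
    y = x.copy()
    for b, t in enumerate(thetas):
        c, s = np.cos(t), np.sin(t)
        y[2 * b] = c * x[2 * b] - s * x[2 * b + 1]
        y[2 * b + 1] = s * x[2 * b] + c * x[2 * b + 1]
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=6)                 # d = 6, so d/2 = 3 angles
thetas = rng.uniform(0, 2 * np.pi, 3)  # placeholder for the learned theta_i
y = rotate(x, thetas)

print(np.isclose(np.linalg.norm(y), np.linalg.norm(x)))  # True: rotations preserve norms
print(np.allclose(rotate(y, -thetas), x))                # True: -theta inverts the map
```

One appeal of a rotation here is that it preserves the distance to the origin, so the mapping does not disturb an entity's level in the embedded hierarchy.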
We use $\mathbf{H}_i^1$ to calculate $\theta_{i} = (\theta_{i,1},\theta_{i,2},\dots ,\theta_{i,\frac{d}{2}})$ :

$$
\theta_{i} = \exp_{\mathbf{o}}^{c}\left(\mathbf{W}'\log_{\mathbf{o}}^{c}\left(\mathbf{H}_{i}^{1}\right)\right) \oplus_{c} \mathbf{b}' \in R^{d/2}, \tag{13}
$$

which makes the mapping adaptive to the entity.

To train the model, we minimize the hyperbolic distance between $f(\mathbf{H}_i^1)$ and $\mathbf{H}_j^2$ for each pair in $S = \{(e_i,e_j)\mid e_i\in E_1,e_j\in E_2\}$ . At the same time, we push negative samples away from an entity. To achieve this, we design the loss function:

$$
\mathrm{loss} = \sum_{(e_i,e_j)\in S} d_{c}\big(f(\mathbf{H}_{i}^{1}), \mathbf{H}_{j}^{2}\big) - \sum_{(e_{i'},e_{j'})\in S^{-}} d_{c}\big(f(\mathbf{H}_{i'}^{1}), \mathbf{H}_{j'}^{2}\big) + \gamma , \tag{14}
$$

where $\gamma > 0$ is a margin hyper-parameter and $S^{-}$ represents the set of negative samples. We follow Wu et al. (2019) to generate the negative samples $S^{-}$ .

Alignment Inference Strategy. Having learned the mapping function from $G_{1}$ to $G_{2}$ , for each entity $e_{i} \in E_{1}$ we find the aligned entity $\tilde{e}_{j} \in E_{2}$ by:

$$
\tilde{e}_{j} = \underset{e_{j}\in E_{2}}{\operatorname{argmin}}\; d_{c}\big(f(\mathbf{H}_{i}^{1}), \mathbf{H}_{j}^{2}\big). \tag{15}
$$

# 5 Experiment

# 5.1 Experimental Setup

Dataset. We choose three benchmark datasets in the experiment: EN-FR, EN-DE and D-Y (Sun et al., 2020c). Specifically, EN-DE consists of two cross-lingual (English-German) DBpedia KGs, where each KG contains 15K entities. Likewise, EN-FR contains 15K matches between English DBpedia and French DBpedia.
D-Y maps 15K entities of the DBpedia KG to 15K entities of the Yago KG. We follow previous work (Sun et al., 2020c) and split all entity pairs into $20\% / 10\% / 70\%$ training, validation and test sets.

Baselines. We compare our approach against 12 state-of-the-art entity alignment methods. These baselines fall into three categories: (1) triple-based (MTransE (Chen et al., 2017), IPTransE (Zhu et al., 2017), BootEA (Sun et al., 2018), RSNs (Guo et al., 2019) and MultiKE (Zhang et al., 2019)), (2) attribute-based (JAPE (Sun et al., 2017) and AttrE (Trisedya et al., 2019)) and (3) graph-based (GCNAlign (Wang et al., 2018), RDGCN (Wu et al., 2019), AliNet (Sun et al., 2020b), RNM (Zhu et al., 2021) and HyperKA (Sun et al., 2020a)). For all baselines, we use the default parameters described in the corresponding papers.

Model Variants. To demonstrate the effectiveness of the different components of our model, we implement three variants of our Geometric method for Entity Alignment in Hyperbolic space (GEA-H): (1) Xavier-I initializes the entity vectors with the Xavier normal initializer instead of pre-trained semantic embeddings; (2) Linear-T replaces our geometric mapping with a linear transformation (Sun et al., 2020a); note that we use a hyperbolic neural network layer for this transformation; (3) Unified-R uses a rotation function with unified parameters for all entities instead of learning them from the hyperbolic embeddings.

Performance Metrics. In our experiments, we use three widely used performance metrics: Hit@1, Hit@5 and MRR (Sun et al., 2020b; Wu et al., 2019; Zhang et al., 2019). Given an entity in one KG, we sort the entities of the other KG by their hyperbolic distance to the queried entity in ascending order. Hit@k counts the proportion of test entities whose aligned entity is in the top- $k$ list, while MRR averages the reciprocal ranks of the aligned entities in the sorted lists.
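
In code, both metrics reduce to a few lines once the rank of each true counterpart in the distance-sorted list is known. The ranks below are hypothetical values for illustration only:

```python
def hit_at_k(ranks, k):
    # Proportion of test entities whose aligned entity is in the top-k list
    return sum(r <= k for r in ranks) / len(ranks)

def mrr(ranks):
    # Mean reciprocal rank of the aligned entities (ranks are 1-based)
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 1, 7, 2]      # hypothetical 1-based ranks of the true matches
print(hit_at_k(ranks, 1))    # -> 0.4
print(hit_at_k(ranks, 5))    # -> 0.8
print(round(mrr(ranks), 3))  # -> 0.595
```

A perfect aligner would place every true counterpart at rank 1, giving Hit@1 = Hit@5 = MRR = 1.0.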
All reported performance results in the experiment were averaged over 3 runs.

Model Configuration. We configure the negative constant curvature of the hyperbolic space as a trainable parameter. We use a two-layer hyperbolic graph neural network, where the dimensions of the hidden representations and the output are 200 and 100, respectively, by default. For the input layer, we initialize the entity vectors $\mathbf{x}$ with the pre-trained word embeddings (300-d) from the FastText model. If the entity name is null or not in the pre-trained dictionary, we use a random vector as initialization. In each epoch of the training process, we sample 125 negative pairs. We train our models using a Riemannian Adam optimizer with a learning rate of 0.001 and a weight decay of 0.01.

# 5.2 Result

Quantitative Evaluation. Table 1 compares the alignment performance of the various approaches on the three datasets, where the best results are shown in bold. Our full-fledged GEA-H consistently achieves the best performance on all three datasets, showing the advantages of GEA-H over entity alignment methods in Euclidean space. Specifically, our model gives a $3\%$ improvement in Hit@1 over the best baseline on EN-DE and D-Y. The performance on EN-FR is slightly lower for all methods, but our GEA-H still outperforms these baselines. We also observe improvements of $20\%$ Hit@1, $10\%$ Hit@5 and 0.2 MRR over HyperKA (the only baseline in hyperbolic space) on average.

Several reasons lead to the advantage of our GEA-H over the baselines. First, compared with approaches based on embedding in Euclidean space, hyperbolic embeddings can preserve the hierarchical structures in a KG at low dimension, which is vital for entity alignment.
For example, an entity at a low level (e.g., "movie") is unlikely to be equivalent to an entity at a high level (e.g., "Emma Stone", the actress of La La Land). Another important advantage of our GEA-H is that we use a geometric mapping method based on rotation instead of a linear transformation. Since the circle circumference and disc area grow exponentially with respect to the radius, a linear transformation cannot capture the non-linear nature of the distance ratio in hyperbolic space. Our rotation method is designed

Table 1: Overall performance comparison
| Category | Model | EN-DE Hit@1 | EN-DE Hit@5 | EN-DE MRR | EN-FR Hit@1 | EN-FR Hit@5 | EN-FR MRR | D-Y Hit@1 | D-Y Hit@5 | D-Y MRR |
|---|---|---|---|---|---|---|---|---|---|---|
| Triple-based | MTransE | 0.307 | 0.518 | 0.407 | 0.247 | 0.467 | 0.351 | 0.463 | 0.675 | 0.559 |
| | IPTransE | 0.350 | 0.515 | 0.430 | 0.169 | 0.320 | 0.243 | 0.313 | 0.456 | 0.378 |
| | BootEA | 0.675 | 0.820 | 0.740 | 0.507 | 0.718 | 0.603 | 0.739 | 0.849 | 0.788 |
| | RSNs | 0.587 | 0.752 | 0.662 | 0.393 | 0.595 | 0.487 | 0.514 | 0.655 | 0.580 |
| | MultiKE | 0.756 | 0.809 | 0.782 | 0.749 | 0.819 | 0.782 | 0.903 | 0.939 | 0.920 |
| Attributes-based | JAPE | 0.288 | 0.512 | 0.394 | 0.262 | 0.497 | 0.372 | 0.469 | 0.687 | 0.567 |
| | AttrE | 0.517 | 0.687 | 0.597 | 0.481 | 0.671 | 0.569 | 0.668 | 0.803 | 0.731 |
| Graph-based | GCNAlign | 0.481 | 0.679 | 0.571 | 0.338 | 0.589 | 0.451 | 0.465 | 0.626 | 0.536 |
| | RDGCN | 0.830 | 0.895 | 0.859 | 0.755 | 0.854 | 0.800 | 0.931 | 0.969 | 0.949 |
| | AliNet | 0.615 | 0.771 | 0.684 | 0.387 | 0.613 | 0.487 | 0.591 | 0.722 | 0.650 |
| | RNM | 0.731 | 0.810 | 0.768 | 0.623 | 0.690 | 0.649 | 0.834 | 0.876 | 0.854 |
| | HyperKA | 0.622 | 0.827 | 0.713 | 0.403 | 0.660 | 0.519 | 0.614 | 0.806 | 0.699 |
| Variants of GEA-H | Xavier-I | 0.679 | 0.757 | 0.759 | 0.564 | 0.662 | 0.629 | 0.685 | 0.796 | 0.739 |
| | Linear-T | 0.674 | 0.767 | 0.718 | 0.539 | 0.645 | 0.589 | 0.654 | 0.734 | 0.693 |
| | Unified-R | 0.727 | 0.779 | 0.751 | 0.664 | 0.713 | 0.687 | 0.802 | 0.852 | 0.824 |
| | **Full-fledged model** | **0.863** | **0.924** | **0.891** | **0.775** | **0.857** | **0.812** | **0.967** | **0.981** | **0.973** |

![](images/6d86be1bfd567e935672af5bf15d3ffc00843cfa03ae4a9c25dad673b37c2c13.jpg)
(a) EN-DE

![](images/9efbfd2844a2e4ed813a7e92c29a836fd630e01343504f00397af2addedc1bc0.jpg)
(b) EN-FR

![](images/c259c7ad3c2900c83429b610416ad82eefff88730e32543b37d54ce3000b701a.jpg)
(c) D-Y
Figure 3: Hit@1 performance comparison using varying dimensions

![](images/37c3863f3fd3436dbf06000fff71fb5da7b7fe64a7f3fbcc1d435199ce9018a7.jpg)
Figure 4: GPU memory cost and running time

to address this problem. Lastly, we learn instance-specific mapping parameters for each entity instead of using unified parameters; the localized parameters account for the locality of the mapping.

# 5.3 Ablation Study

Initialization Method. Initialization can have a significant impact on neural network models. In contrast to GEA-H, Xavier-I uses the Xavier normal initializer (Sun et al., 2020a) to initialize the entity vectors. GEA-H consistently outperforms Xavier-I by a large margin, which verifies the effectiveness of our initialization.

Effectiveness of Geometric Mapping. To verify the effectiveness of our rotation-based geometric mapping, we compare against the Linear-T variant, which replaces the rotation with a hyperbolic linear transformation as in HyperKA. The experimental results in Table 1 show that our method outperforms Linear-T across all three datasets. This is because a linear transformation fails to capture the non-linear growth of the distance ratio with respect to the radius for nodes at different levels.

Localized Mapping. We also investigate the effectiveness of our instance-specific mapping function. The variant Unified-R uses a single mapping function for all entities instead of learning it from the entity embeddings. Comparing Unified-R with our GEA-H across the three datasets, the results indicate that our localized mapping function increases performance significantly; it is essential to learn the parameters adaptively.

![](images/aac20f3452f8bbdf909f45e3107a936b2a914296c7ee7c96b3a812a2cea372f3.jpg)
(a) Curvature learning

![](images/0718d1450aad85a36f9cc1b2198c905c9c4ba3f9a6cecde93f5957b599a6096f.jpg)
(b) Hit@1 w.r.t. curvature
Figure 5: Effects of curvature

Table 2: Performance w.r.t. Negative Sampling Ratio
| Ratio | EN-DE Hit@1 | EN-DE MRR | EN-FR Hit@1 | EN-FR MRR | D-Y Hit@1 | D-Y MRR |
|---|---|---|---|---|---|---|
| 25 | 0.845 | 0.873 | 0.746 | 0.784 | 0.931 | 0.945 |
| 50 | 0.859 | 0.884 | 0.768 | 0.806 | 0.949 | 0.961 |
| 75 | 0.863 | 0.891 | 0.775 | 0.812 | 0.967 | 0.973 |
| 100 | 0.864 | 0.893 | 0.776 | 0.814 | 0.971 | 0.973 |
| 150 | 0.870 | 0.894 | 0.772 | 0.810 | 0.970 | 0.969 |

# 5.4 Sensitivity of Parameters

Effect of Varying Dimensions. The dimension of the hyperbolic space plays a vital role in the expressiveness of our hyperbolic KG embeddings. To demonstrate the effect of the dimension, we vary the length of the hyperbolic embeddings from 50 to 300 with an interval of 50. Figure 3 shows Hit@1 on the three datasets with varying dimensions. We also investigate the effect of the dimension for baselines whose dimension can be configured; note that the dimension for RDGCN and RNM is not configurable (fixed at 300). As the dimension approaches 50, the performance of some baselines decreases drastically on all three datasets. Our model offers a much better representation and achieves the best performance in low-dimensional space. This validates our hypothesis that our method can solve the entity alignment problem in a low-dimensional hyperbolic space with promising results. In addition, as shown in Figure 4, the occupied GPU memory and running time increase drastically as the dimension increases. There is therefore a trade-off between accuracy and computational cost.

Evaluation on the Number of Negative Instances. Recall that we generate negative instances for training purposes. The performance of entity alignment is highly sensitive to the number of negative samples. In Table 2, we demonstrate the impact of the sampling ratio (the number of negative instances per pair) for GEA-H. Sampling more negative instances is clearly beneficial for the performance of the model. On all three datasets, we observe limited performance improvement when the sampling ratio goes beyond 75, which justifies our default parameter.

![](images/01988b89dcf2cb8fd609d292fbba834c47341a6e35f5779d7e89a6a84e5fcdf0.jpg)
Figure 6: Visualization in 2-d space

The reason behind the diminishing performance returns is that there is a trade-off between pushing negative samples away and minimizing the distance of positive samples.
When there are too many negative samples, the loss on the negative samples carries much more weight than that on the positive samples, which hurts the overall performance. Moreover, setting the sampling ratio too aggressively only increases the computation cost of training.

# 5.5 Effect of Curvature

In hyperbolic space, the hierarchical structure is reflected by the curvature: with different values of the curvature, the knowledge graph is embedded into different hierarchical structures. In this paper, the value of the curvature is trainable as a model parameter. Figure 5(a) shows the value of $c$ during training; note that the curvature of our model is initialized to 1. We can see that the value of $c$ converges during training. To further investigate the effect of the curvature, we also train with fixed curvatures instead. As shown in Figure 5(b), the curvature learned by our model converges near the estimated optimal curvature.

# 5.6 Visualization

We visualize the 2-d embeddings (after mapping) for random pairs of entities from the dataset D-Y in the Poincaré disk. Figure 6 shows 200 pairs of entities, where entities on $G_{1}$ are marked with small circles and their corresponding entities on $G_{2}$ are marked with triangles. We can see that pairs of equivalent entities lie close to each other in the 2-d hyperbolic space.

# 6 Conclusion

We proposed GEA-H, a geometric entity alignment method in hyperbolic space. GEA-H learns low-dimensional hyperbolic embeddings for the entities of KGs with an attention-based hyperbolic graph neural network. We design a geometric mapping function based on rotation and learn localized mapping parameters for each entity to account for the locality of the mapping.

# Limitations

GEA-H is focused on an important task: matching equivalent entities across knowledge graphs and providing new tools to study KGs.
We do not make any claims about its performance beyond this scope. One limitation of our work is that it requires a set of 1-to-1 alignments for training, and these alignments typically have to be labeled manually.

# References

Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multi-relational Data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, Christopher J. C. Burges, Léon Bottou, Zoubin Ghahramani, and Kilian Q. Weinberger (Eds.). 2787-2795.
Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. 2017. Geometric Deep Learning: Going beyond Euclidean data. IEEE Signal Process. Mag. 34, 4 (2017), 18-42. https://doi.org/10.1109/MSP.2017.2693418
Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual Knowledge Graph Embeddings for Cross-lingual Knowledge Alignment. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, Carles Sierra (Ed.). ijcai.org, 1511-1517. https://doi.org/10.24963/ijcai.2017/209
Wanyun Cui, Yanghua Xiao, Haixun Wang, Yangqiu Song, Seung-won Hwang, and Wei Wang. 2017. KBQA: Learning Question Answering over QA Corpora and Knowledge Bases. Proc. VLDB Endow. 10, 5 (2017), 565-576. https://doi.org/10.14778/3055540.3055549
Octavian-Eugen Ganea, Gary Bécigneul, and Thomas Hofmann. 2018. Hyperbolic Neural Networks. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (Eds.). 5350-5360.

Congcong Ge, Xiaoze Liu, Lu Chen, Baihua Zheng, and Yunjun Gao. 2021. Make It Easy: An Effective End-to-End Entity Alignment Framework. In SIGIR 2021, Fernando Diaz, Chirag Shah, Torsten Suel, Pablo Castells, Rosie Jones, and Tetsuya Sakai (Eds.). ACM, 777-786.
Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable Feature Learning for Networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, Balaji Krishnapuram, Mohak Shah, Alexander J. Smola, Charu C. Aggarwal, Dou Shen, and Rajeev Rastogi (Eds.). ACM, 855-864. https://doi.org/10.1145/2939672.2939754
Hao Guo, Jiuyang Tang, Weixin Zeng, Xiang Zhao, and Li Liu. 2021. Multi-modal entity alignment in hyperbolic space. Neurocomputing 461 (2021), 598-607. https://doi.org/10.1016/j.neucom.2021.03.132
Lingbing Guo, Zequn Sun, and Wei Hu. 2019. Learning to Exploit Long-term Relational Dependencies in Knowledge Graphs. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA (Proceedings of Machine Learning Research, Vol. 97), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). PMLR, 2505-2514. http://proceedings.mlr.press/v97/guo19c.html
Richard A. Harshman et al. 1970. Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multimodal factor analysis. (1970).
Bo Hui and Wei-Shinn Ku. 2022. Low-rank Nonnegative Tensor Decomposition in Hyperbolic Space. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14 - 18, 2022, Aidong Zhang and Huzefa Rangwala (Eds.). ACM, 646-654. https://doi.org/10.1145/3534678.3539317
Bo Hui, Da Yan, Haiquan Chen, and Wei-Shinn Ku. 2022. Time-sensitive POI Recommendation by Tensor Completion with Side Information. In 38th IEEE International Conference on Data Engineering, ICDE 2022, Kuala Lumpur, Malaysia, May 9-12, 2022.
IEEE, 205-217. https://doi.org/10.1109/ICDE53745.2022.00020
Bo Hui, Da Yan, Wei-Shinn Ku, and Wenlu Wang. 2020. Predicting Economic Growth by Region Embedding: A Multigraph Convolutional Network Approach. In CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020, Mathieu d'Aquin, Stefan Dietze, Claudia Hauff, Edward Curry, and Philippe Cudre-Mauroux (Eds.). ACM, 555-564. https://doi.org/10.1145/3340531.3411882
Viet-Phi Huynh and Paolo Papotti. 2019. Buckle: Evaluating Fact Checking Algorithms Built on Knowledge Bases. Proc. VLDB Endow. 12, 12 (2019), 1798-1801. https://doi.org/10.14778/3352063.3352069
Chao Jiang, Yi He, Richard Chapman, and Hongyi Wu. 2022. Camouflaged Poisoning Attack on Graph Neural Networks. In ICMR '22: International Conference on Multimedia Retrieval, Newark, NJ, USA, June 27 - 30, 2022, Vincent Oria, Maria Luisa Sapino, Shin'ichi Satoh, Brigitte Kerhervé, Wen-Huang Cheng, Ichiro Ide, and Vivek K. Singh (Eds.). ACM, 451-461. https://doi.org/10.1145/3512527.3531373
Ernesto Jiménez-Ruiz and Bernardo Cuenca Grau. 2011. LogMap: Logic-Based and Scalable Ontology Matching. In The Semantic Web - ISWC 2011 - 10th International Semantic Web Conference, Bonn, Germany, October 23-27, 2011, Proceedings, Part I (Lecture Notes in Computer Science, Vol. 7031), Lora Aroyo, Chris Welty, Harith Alani, Jamie Taylor, Abraham Bernstein, Lalana Kagal, Natasha Fridman Noy, and Eva Blomqvist (Eds.). Springer, 273-288. https://doi.org/10.1007/978-3-642-25073-6_18
Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. https://openreview.net/forum?id=SJU4ayYgl
Simon Lacoste-Julien, Konstantina Palla, Alex Davies, Gjergji Kasneci, Thore Graepel, and Zoubin Ghahramani. 2013.
SIGMa: simple greedy matching for aligning large knowledge bases. In The 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2013, Chicago, IL, USA, August 11-14, 2013, Inderjit S. Dhillon, Yehuda Koren, Rayid Ghani, Ted E. Senator, Paul Bradley, Rajesh Parekh, Jingrui He, Robert L. Grossman, and Ramasamy Uthurusamy (Eds.). ACM, 572-580. https://doi.org/10.1145/2487575.2487592
Bing Liu, Harrison Scells, Guido Zuccon, Wen Hua, and Genghong Zhao. 2021. ActiveEA: Active Learning for Neural Entity Alignment. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (Eds.). Association for Computational Linguistics, 3364-3374. https://doi.org/10.18653/v1/2021.emnlp-main.270
Xin Mao, Wenting Wang, Yuanbin Wu, and Man Lan. 2021. Boosting the Speed of Entity Alignment $10 \times$ : Dual Attention Matching Network with Normalized Hard Sample Mining. In WWW 2021, Jure Leskovec, Marko Grobelnik, Marc Najork, Jie Tang, and Leila Zia (Eds.). ACM / IW3C2, 821-832.
Xin Mao, Wenting Wang, Huimin Xu, Yuanbin Wu, and Man Lan. 2020. Relational Reflection Entity Alignment. In CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020, Mathieu d'Aquin, Stefan Dietze, Claudia Hauff, Edward Curry, and Philippe Cudré-Mauroux (Eds.). ACM, 1095-1104. https://doi.org/10.1145/3340531.3412001
Maximilian Nickel and Douwe Kiela. 2017. Poincaré Embeddings for Learning Hierarchical Representations. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (Eds.). 6338-6347.
+Frederic Sala, Christopher De Sa, Albert Gu, and Christopher Ré. 2018. Representation Tradeoffs for Hyperbolic Embeddings. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018 (Proceedings of Machine Learning Research, Vol. 80), Jennifer G. Dy and Andreas Krause (Eds.). PMLR, 4457-4466. http://proceedings.mlr.press/v80/sala18a.html +Zequn Sun, Muhao Chen, Wei Hu, Chengming Wang, Jian Dai, and Wei Zhang. 2020a. Knowledge Association with Hyperbolic Knowledge Graph Embeddings. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (Eds.). Association for Computational Linguistics, 5704-5716. https://doi.org/10.18653/v1/2020.emnlp-main.460 +Zequn Sun, Wei Hu, and Chengkai Li. 2017. Cross-Lingual Entity Alignment via Joint Attribute-Preserving Embedding. In The Semantic Web - ISWC 2017 - 16th International Semantic Web Conference, Vienna, Austria, October 21-25, 2017, Proceedings, Part I (Lecture Notes in Computer Science, Vol. 10587), Claudia d'Amato, Miriam Fernandez, Valentina A. M. Tamma, Freddy Lecué, Philippe Cudré-Mauroux, Juan F. Sequeda, Christoph Lange, and Jeff Heflin (Eds.). Springer, 628-644. https://doi.org/10.1007/978-3-319-68288-4_37 +Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018. Bootstrapping Entity Alignment with Knowledge Graph Embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, Jérôme Lang (Ed.). ijcai.org, 4396-4402. https://doi.org/10.24963/ijcai.2018/611 +Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei Zhang, and Yuzhong Qu. 2020b. Knowledge Graph Alignment Network with Gated Multi-Hop Neighborhood Aggregation. 
In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020. AAAI Press, 222-229. https://aaai.org/ojs/index.php/AAAI/article/view/5354
Zequn Sun, Qingheng Zhang, Wei Hu, Chengming Wang, Muhao Chen, Farahnaz Akrami, and Chengkai Li. 2020c. A Benchmarking Study of Embedding-based Entity Alignment for Knowledge Graphs. Proc. VLDB Endow. 13, 11 (2020), 2326-2340. http://www.vldb.org/pvldb/vol13/p2326-sun.pdf
Bayu Distiawan Trisedya, Jianzhong Qi, and Rui Zhang. 2019. Entity Alignment between Knowledge Graphs Using Attribute Embeddings. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019. AAAI Press, 297-304. https://doi.org/10.1609/aaai.v33i01.3301297
Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018. Cross-lingual Knowledge Graph Alignment via Graph Convolutional Networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (Eds.). Association for Computational Linguistics, 349-357. https://doi.org/10.18653/v1/d18-1032
Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, and Dongyan Zhao. 2019. Relation-Aware Entity Alignment for Heterogeneous Knowledge Graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, Sarit Kraus (Ed.). ijcai.org, 5278-5284.
https://doi.org/10.24963/ijcai.2019/733 +Chengjin Xu, Fenglong Su, and Jens Lehmann. 2021. Time-aware Graph Neural Network for Entity Alignment between Temporal Knowledge Graphs. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (Eds.). Association for Computational Linguistics, 8999-9010. https://doi.org/10.18653/v1/2021.emnlp-main.709 +Qingheng Zhang, Zequn Sun, Wei Hu, Muhao Chen, Lingbing Guo, and Yuzhong Qu. 2019. Multi-view Knowledge Graph Embedding for Entity Alignment. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, Sarit Kraus (Ed.). ijcai.org, 5429-5435. https://doi.org/10.24963/ijcai.2019/754 + +Xiang Zhao, Weixin Zeng, Jiuyang Tang, Wei Wang, and Fabian Suchanek. 2020. An Experimental Study of State-of-the-Art Entity Alignment Approaches. IEEE Transactions on Knowledge and Data Engineering (2020), 1-1. https://doi.org/10.1109/TKDE.2020.3018741 +Hao Zhu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Iterative Entity Alignment via Joint Knowledge Embeddings. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, Carles Sierra (Ed.). ijcai.org, 4258-4264. https://doi.org/10.24963/ijcai.2017/595 +Yao Zhu, Hongzhi Liu, Zhonghai Wu, and Yingpeng Du. 2021. Relation-Aware Neighborhood Matching Model for Entity Alignment. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021. AAAI Press, 4749-4756. 
https://ojs.aaai.org/index.php/AAAI/article/view/16606 \ No newline at end of file diff --git a/alocalizedgeometricmethodtomatchknowledgeinlowdimensionalhyperbolicspace/images.zip b/alocalizedgeometricmethodtomatchknowledgeinlowdimensionalhyperbolicspace/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4a892338620e2f500154e91f6ea2dd3608c1bf23 --- /dev/null +++ b/alocalizedgeometricmethodtomatchknowledgeinlowdimensionalhyperbolicspace/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4675f726c3f03986bff672c668e43f7888b74169d7960537c48888c9efc5c7b +size 446152 diff --git a/alocalizedgeometricmethodtomatchknowledgeinlowdimensionalhyperbolicspace/layout.json b/alocalizedgeometricmethodtomatchknowledgeinlowdimensionalhyperbolicspace/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0d8aec27b937642deef01b3e26b7634533e4f9ec --- /dev/null +++ b/alocalizedgeometricmethodtomatchknowledgeinlowdimensionalhyperbolicspace/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59bcc50770a7b6605faa05a4455796775fcc9f95cbf29b50fe1abe5fa4324032 +size 391029 diff --git a/amajorobstaclefornlpresearchletstalkabouttimeallocation/1478657b-32e9-4378-a494-a77ba30246a1_content_list.json b/amajorobstaclefornlpresearchletstalkabouttimeallocation/1478657b-32e9-4378-a494-a77ba30246a1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d6f501e4aceeb4cd3d32e33c0582f81f73b6d38e --- /dev/null +++ b/amajorobstaclefornlpresearchletstalkabouttimeallocation/1478657b-32e9-4378-a494-a77ba30246a1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b4fc4570289d268d7bf9ec07d52caf464e32dd97eea68eddbf447cd249edbb5 +size 67938 diff --git a/amajorobstaclefornlpresearchletstalkabouttimeallocation/1478657b-32e9-4378-a494-a77ba30246a1_model.json
b/amajorobstaclefornlpresearchletstalkabouttimeallocation/1478657b-32e9-4378-a494-a77ba30246a1_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6581bb2d95dbda8fd50b301132ce135ae3e9aefb --- /dev/null +++ b/amajorobstaclefornlpresearchletstalkabouttimeallocation/1478657b-32e9-4378-a494-a77ba30246a1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f3d5de3d737f54c22b69abb3ba4a23fa746d072055dec9ef1d1fd24ed0ddce0 +size 84439 diff --git a/amajorobstaclefornlpresearchletstalkabouttimeallocation/1478657b-32e9-4378-a494-a77ba30246a1_origin.pdf b/amajorobstaclefornlpresearchletstalkabouttimeallocation/1478657b-32e9-4378-a494-a77ba30246a1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f82191d3014b9e213e3e6379781898cbd1b77411 --- /dev/null +++ b/amajorobstaclefornlpresearchletstalkabouttimeallocation/1478657b-32e9-4378-a494-a77ba30246a1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47d7280838a3abfabb3e69f82058476300cb4e5f4aab99770564e1f327177f9c +size 229124 diff --git a/amajorobstaclefornlpresearchletstalkabouttimeallocation/full.md b/amajorobstaclefornlpresearchletstalkabouttimeallocation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f33dcd0cb977a9c428b48d49e4c0fed92822705d --- /dev/null +++ b/amajorobstaclefornlpresearchletstalkabouttimeallocation/full.md @@ -0,0 +1,209 @@ +# A Major Obstacle for NLP Research: Let's Talk about Time Allocation! + +Katharina Kann$\spadesuit$ and Shiran Dudy$\spadesuit$ and Arya D. McCarthy\* + +$\spadesuit$ University of Colorado Boulder + +firstname.lastname@colorado.edu + +*Johns Hopkins University + +lastname@jhu.edu + +# Abstract + +The field of natural language processing (NLP) has grown over the last few years: conferences have become larger, we have published an incredible number of papers, and state-of-the-art research has been implemented in a large variety of customer-facing products.
However, this paper argues that we have been less successful than we should have been and reflects on where and how the field fails to tap its full potential. Specifically, we demonstrate that, in recent years, subpar time allocation has been a major obstacle for NLP research. We outline multiple concrete problems together with their negative consequences and, importantly, suggest remedies to improve the status quo. We hope that this paper will be a starting point for discussions around which common practices are – or are not – beneficial for NLP research. + +# 1 Introduction + +*Why did I get nothing done today?* is a question many people ask themselves frequently throughout their professional careers. Psychologists agree that good time management skills are of utmost importance for a healthy and productive lifestyle (Lakein, 1973; Claessens et al., 2007; Major et al., 2002, inter alia). However, many academics and industry researchers lack time management skills, working long days and not getting enough done – not even the interesting experiment they had wanted to start over a year ago. + +In this position paper, we argue that natural language processing (NLP) as a field has a similar problem: we do not allocate our time well. Instead, we spend it on things that seem more urgent than they are, are easy but unimportant, or result in the largest short-term gains. This paper identifies the largest traps the authors believe the NLP community falls into. We then provide, for each of the four identified problems (P1-P4), suggested remedies. While we know that – just as for individuals – change takes time, we hope that this paper, in combination with the EMNLP 2022 special theme *Open questions, major obstacles, and unresolved issues in NLP*, will ignite critical discussions. + +![](images/6e7c3ae4826c33d21a664af4d3849e175e5ddcbd77e94e45e40d8c0e9a73e2f3.jpg) +Figure 1: Avg. # of authors per paper; 2000-2021.
+ +Related Work Over the last couple of years, multiple papers have provided critical reflections on the state of affairs in NLP research: Bender and Koller (2020) criticizes the hype around language models and argues, similarly to Bisk et al. (2020), that true understanding is impossible when language is detached from the physical world. In contrast, Bowman (2022) talks about the risks associated with underclaiming. Turning to evaluation, Bowman and Dahl (2021) provides a critical view on benchmarking, and Rodriguez et al. (2021) proposes ways to improve leaderboards in order to truly track progress. Other position papers discuss the importance of data curation (Rogers, 2021) and the need for focusing on the user for natural language generation (Dudy et al., 2021; Flek, 2020). Bianchi and Hovy (2021) identifies general concerning trends in NLP research. Parcalabescu et al. (2021) discusses our use of the term multimodality and proposes to use task-specific definitions of multimodality in the machine learning era. Church (2020) discusses downward trends in reviewing quality and whether these can be mitigated. We add to those meta-level papers by discussing subpar use of time as a major problem. + +# 2 What Is Going Wrong? + +# 2.1 P1: Too Many Papers per Author + +The Situation Publications in NLP are cheap compared to many other fields: there is no need to set up complicated real-world experiments (as, e.g., in physics), existing data can be used for many studies, and lately even much of the code we use is readily available. Thus, the time from idea to final paper can be extremely short. Some researchers also split one substantial paper's work into 2-5 less dense and structurally similar papers. + +Consequently, NLP researchers publish a lot: Rei¹ finds that the 14 most productive first authors in NLP published 9 (1 researcher), 6 (2 researchers), and 5 (11 researchers) papers in 2021.
And this number only counts the most prestigious conferences in NLP: Google Scholar shows that, across all venues, the first 3 authors published 16, 7, and 7 papers. + +While some enjoy writing, many - especially junior - NLP researchers feel external pressure to publish in large volumes; quantity often overshadows quality of publications for hiring decisions, and PhD applicants struggle to find advisors if they do not have multiple relevant publications. + +Negative Consequences A straightforward consequence of the pressure to publish is that much of an NLP researcher's time goes into writing: conservatively assuming one week of full-time writing per paper, the authors with the most papers respectively spend 16, 7, and 7 weeks per year just writing; this is nearly $\frac{1}{3}$ of the most productive author's year. + +The second negative consequence is the time needed to review this many papers: reviewing one substantial paper would be quicker than reviewing 5 separate ones, especially if reviewers are not shared. This lowers review quality, frustrates authors, and causes errors to be missed. The latter then misinforms other researchers, also wasting their time. + +Third, the ongoing race for publications makes it difficult for researchers to stop and reflect on whether what they are currently working on is worthwhile. It also leads to mixed feelings regarding the start of ambitious, high-risk/high-reward research: many researchers are scared away by the prospect of potentially not obtaining their expected outcomes and being unable to publish. Thus, the need to constantly produce large quantities of output not only reduces the quality of individual papers, but also hinders meaningful progress of the field by encouraging the pursuit of superficial research questions. + +Finally, thorough scholarship is extremely difficult in this environment.
This leads to all sorts of shortcomings in NLP publications - missing references, mathematical errors, and even nonsensical experimental designs - which are then overlooked by overworked reviewers (Church, 2020). + +Suggested Remedies To change the state of the field, we can either change our expectations or the available opportunities. For the former, it is crucial that quality is valued more than quantity for hiring. To start, we recommend having reviews be publicly available (as done, e.g., by the Conference on Neural Information Processing Systems²), to help people from adjacent fields understand the value of a candidate's publication. Another option is to standardize requesting reviews from experts (in addition to letters of recommendation). To reduce the opportunities for submitting large amounts of less impactful papers, we could set an upper limit for the number of (first-author) papers one can submit. This could be a hard limit or a soft limit with a penalty for too many low-quality submissions, such as blocking papers with low average scores from resubmission for a fixed period of time.³ + +# 2.2 P2: Too Many Authors per Paper + +The Situation The second problem we highlight is the inverse of the first: too many authors per paper, given the strategies we employ to manage collaborations. As shown in Figure 1, author lists are, on average, becoming longer and longer: in 2000, the average number of authors on ACL and EMNLP papers was 2.25 and 1.97, respectively, but by 2021 those numbers had increased to 4.65 and 4.49. Large collaborations can greatly advance science and, if done well, are beneficial to all participating researchers. However, they also pose an unintended challenge: many times, each author's expected, as well as actual, contribution becomes unclear. The former is often a consequence of a lack of communication or team management skills.
The latter is the result of NLP not having a standardized way to communicate each researcher's contribution to collaborative projects. + +In a traditional two-author setting with a student and their advisor, it is generally understood that the student does most of the hands-on work and their advisor guides the research and writing process. However, with more authors, the situation becomes less clear to both authors and readers. + +Negative Consequences When expected contributions are unclear to the authors themselves, it is easy to have too many cooks spoil the broth: e.g., one author could write one section while one of their colleagues rewrites another section in a way that makes combining them non-trivial and time-consuming. Additionally, being vague about each author's contributions can lead to friction around authorship, which costs time and mental energy and takes a toll on the relationship between the people involved; also, authorship discussions tend to disadvantage members of underrepresented groups (Ni et al., 2021). + +Worse, however, is the situation in which it is the reader to whom each author's contribution is not obvious. When researchers give authorship to people whose contribution was minimal, they devalue the time and work of middle authors who actually do contribute a lot. + +Another problem with too many authors is that miscommunication easily wastes time and resources. For instance, it is easy to be inconsistent if experiments are run by multiple researchers, who might not use the same codebase. + +Suggested Remedies In order to avoid situations where the contributions of individual authors are unclear to the reader – and, thus, accurate assignment of credit is impossible – we propose a straightforward solution that can completely eliminate this negative consequence of large collaborations: publishing a contribution statement (Brand et al., 2015) for each paper.
This is common in other fields but very rare in NLP (a notable exception is, e.g., Srivastava et al. (2022)). Making a contribution statement mandatory for NLP publications would be easy but extremely effective. + +For group management, setting expectations together and communicating the expected roles of all involved parties, including the possible authorship order, can save time and energy. We suggest that doing this right at the beginning of each collaborative project should become common practice in NLP ("#EMNLP2022Rule"). However, it has been shown that many principal investigators (PIs) lack training in lab and personnel management skills (Van Noorden, 2018). Thus, PIs and their research groups would likely benefit from explicit training. One possible way to achieve this could be to extend existing mentoring initiatives at NLP conferences to focus more on leadership skills. Another suggestion mentioned by Van Noorden (2018) – which we recommend for NLP – is that PIs should ask for feedback from their groups more regularly. + +# 2.3 P3: Gatekeeping + +The Situation We do like unconventional topics (e.g., the connection between synesthesia and character-level NLP models (Kann and Monsalve-Mercado, 2021)), and statements like "This work is too interdisciplinary to get accepted" or "This work would be better for a workshop on a specific topic" are hardly ever true. However, reviewers in NLP like papers that resemble those they themselves have previously published. They only accept non-mainstream submissions if they are written in a very specific style: authors need to know how to pitch a topic to the NLP community. + +For readers new to publishing in NLP, here are the basic guidelines we have found for getting a paper accepted – many of which are nonsensical: 1) Your submitted paper should always have the exact maximum number of pages – not a line more or less. 2) The first section should be called Introduction.
3) The last section should be called Conclusion – not Discussion or similar. 4) You should have a figure that is (somewhat) related to your paper's content on the top right corner of the first page. 5) You should have equations in your paper – complicated equations will increase your chances of acceptance (Lipton and Steinhardt, 2019). 6) Do not explicitly write out popular model equations, e.g., for the LSTM (Hochreiter and Schmidhuber, 1997). 7) The Related Work section should come immediately before the Conclusion, to make your novelty seem larger. 8) Do not present only a dataset—provide empirical results, even if they are unimportant. + +Negative Consequences This gatekeeping especially affects people whose research mentors are not able to teach them the style of the NLP community: 1) people from universities with little experience in NLP research, 2) researchers from countries not traditionally part of the international NLP community, and 3) people from adjacent fields, such as psychology, social science, or even linguistics. + +Thus, gatekeeping reinforces existing social inequalities and harms our research progress, as we get exposed to groundbreaking ideas later than necessary – or never. It is also a huge waste of our time: for instance, there is no reason why content presented in 7.56 pages should be less impactful than content presented in 8 pages. However, we, as a community, make it an issue and cause researchers to waste hours trimming or extending papers. Similarly, we force people to waste their time thinking about which equations they can put into a paper that does not, in fact, benefit from them. + +Suggested Remedies We argue that resolving the problem of gatekeeping is crucial in order to allow our field to grow in a healthy way. We make two suggestions: 1) We need to explicitly educate reviewers to not take superficial properties of papers into account.
This could be implemented, e.g., in the form of mandatory training videos for all ACL reviewers. However, this is a type of implicit bias (Greenwald and Banaji, 1995) and we encourage more discussion on possible solutions. 2) While we are waiting for this to be effective, we need to level the playing field by making unofficial rules and tricks widely known. The easiest way would be to publish explanations for first-time submitters together with calls for papers. Mentoring programs are great alternatives: while they are costly in time for individuals, they will, in the long run, save time for the field as a whole. + +# 2.4 P4: Missing the Point + +The Situation NLP aims to build technology that improves the lives of its end users. However, NLP research is often purely technically driven, and actual human needs are investigated little or not at all (Flek, 2020; Dudy et al., 2021); this is especially prevalent when building tools for communities speaking low-resource languages (Caselli et al., 2021). This can – and does – result in researchers focusing on irrelevant problems. A similar problem is what we call legacy research questions: research questions that are motivated by problems or tools that are no longer relevant. Examples pointed out by Bowman (2022) are papers motivated by the brittleness of question answering (QA) systems whose performance has long been surpassed by the state of the art, or analyses that draw conclusions based on outdated systems like BERT (Devlin et al., 2019).⁶ + +To quantify this problem, we performed a case study by randomly sampling and examining 30 papers from human-oriented tracks at EMNLP 2021. Only 3 papers engaged with users through evaluation and only 2 papers grounded their research questions in user needs; details can be found in Appendix A.
+ +Last, looking at recent top-tier conferences in the field of NLP, a substantial number of papers focus on what we call quick research questions, i.e., projects which maximize short-term gains for the researcher(s): Baden et al. (2022) identify that the majority of NLP research for text analysis is devoted to "easy problems", instead of aiming to "measure much more demanding constructs." + +Negative Consequences Work that is missing the point does not move the field in a meaningful direction. It wastes the researcher's time by detracting from topics that truly benefit the community, the public, or the researcher themselves. It also wastes the reviewers' time as well as the general reader's time by failing to provide insights. It needlessly uses computing resources, thus contributing to the climate crisis (Strubell et al., 2019). Ignoring user needs also risks causing real harm to stakeholders (Raji et al., 2022). Designing technology without the participation of potential users has in the past led to spectacular product failures (Johnson, 2021; Simon, 2020). + +Finally, work on superficial research questions can be fast and result in a large amount of research output. In our current system that values quantity over quality for hiring, researchers working on superficial questions tend to have more successful careers. This, in turn, encourages new researchers to also waste their time by doing something similar. + +Suggested Remedies It is important for NLP researchers to engage more with the intended users of the technology we build. This could be encouraged during the review process, e.g., with targeted questions. Legacy research questions will need to be detected during reviewing as well - raising awareness of this phenomenon will likely reduce impacted submissions and acceptance of papers focused on legacy research questions alike.
Regarding quick research questions, one of the remedies suggested for P1 could be a possible solution here as well: moving towards valuing quality over quantity. + +# 3 Conclusion + +In this paper, we outlined how several problematic practices in NLP research lead to a waste of the most important resource we have – our time – and, thus, constitute major obstacles for NLP research. We suggested multiple possible solutions to existing problems. We hope to foster much-needed discussion around how we, as a community, envision moving forward in the face of these concerns. + +# Limitations + +As we focus on time allocation, this is not an exhaustive list of problems we see in our research community. However, other concerns are beyond the scope of this work. Similarly, not all mentioned problems apply to all groups – it is, for instance, totally possible that individual groups excel at managing large collaborations. + +We further do not claim that our suggested remedies are perfect solutions. They come with their own sets of challenges and should be implemented with care: for instance, contribution statements could unintentionally minimize contributions that do not make it into the final paper. Additionally, we do not claim to have listed all possible remedies for the identified problems. On the contrary, we explicitly encourage other researchers to start discussing ways to improve the status quo. + +# Acknowledgments + +We would like to thank the anonymous reviewers for their thought-provoking comments as well as the members of University of Colorado Boulder's NALA Group for their helpful feedback. This research was supported by the NSF National AI Institute for Student-AI Teaming (iSAT) under grant DRL 2019805. The opinions expressed are those of the authors and do not represent views of the NSF. ADM is supported by an Amazon Fellowship and a Frederick Jelinek Fellowship. + +# References + +Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021.
Aspect-controllable opinion summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6578-6593. +Christian Baden, Christian Pipal, Martijn Schoonvelde, and Mariken A. C. G. van der Velden. 2022. Three gaps in computational text analysis methods for social sciences: A research agenda. Communication Methods and Measures, 16(1):1-18. +Cristian-Paul Bara, Sky CH-Wang, and Joyce Chai. 2021. MindCraft: Theory of mind modeling for situated dialogue in collaborative tasks. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1112-1125. +Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5185-5198, Online. Association for Computational Linguistics. +Federico Bianchi and Dirk Hovy. 2021. On the gap between adoption and understanding in NLP. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 3895-3901, Online. Association for Computational Linguistics. +Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8718-8735, Online. Association for Computational Linguistics. +Samuel Bowman. 2022. The dangers of underclaiming: Reasons for caution when reporting how NLP systems fail. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7484-7499, Dublin, Ireland. Association for Computational Linguistics. +Samuel R. Bowman and George Dahl. 2021. What will it take to fix benchmarking in natural language understanding?
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4843-4855, Online. Association for Computational Linguistics. +Amy Brand, Liz Allen, Micah Altman, Marjorie Hlava, and Jo Scott. 2015. Beyond authorship: attribution, contribution, collaboration, and credit. *Learned Publishing*, 28(2):151-155. +Tommaso Caselli, Roberto Cibin, Costanza Conforti, Enrique Encinas, and Maurizio Teli. 2021. Guiding principles for participatory design-inspired natural language processing. In Proceedings of the 1st Workshop on NLP for Positive Impact, pages 27-35, Online. Association for Computational Linguistics. +Guanhua Chen, Shuming Ma, Yun Chen, Li Dong, Dongdong Zhang, Jia Pan, Wenping Wang, and Furu Wei. 2021. Zero-shot cross-lingual transfer of neural machine translation with multilingual pretrained encoders. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 15-26. + +Kenneth Ward Church. 2020. Emerging trends: Reviewing the reviewers (again). Natural Language Engineering, 26(2):245-257. +Brigitte JC Claessens, Wendelien Van Eerde, Christel G Rutte, and Robert A Roe. 2007. A review of the time management literature. Personnel Review. +Christopher Clark, Jordi Salvador, Dustin Schwenk, Derrick Bonafilia, Mark Yatskar, Eric Kolve, Alvaro Herrasti, Jonghyun Choi, Sachin Mehta, Sam Skjonsberg, et al. 2021. Iconary: A pictionary-based game for testing multimodal communication with drawings and text. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1864-1886. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Shiran Dudy, Steven Bedrick, and Bonnie Webber. 2021. Refocusing on relevance: Personalization in NLG. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5190-5202, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Avia Efrat, Uri Shaham, Dan Kilman, and Omer Levy. 2021. Cryptonite: A cryptic crossword benchmark for extreme ambiguity in language. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4186-4192. +Tobias Falke and Patrick Lehnen. 2021. Feedback attribution for counterfactual bandit learning in multi-domain spoken language understanding. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1190-1198. +Lucie Flek. 2020. Returning the N to NLP: Towards contextually personalized classification models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7828-7838. +Siddhant Garg and Alessandro Moschitti. 2021. Will this question be answered? Question filtering via answer model distillation for efficient question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7329-7346.
+Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, Wanyu Du, Esin Durmus, Ondrej Dusek, Chris Chinenye Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mihir + +Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Andre Niyongabo Rubungo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, Joao Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, and Jiawei Zhou. 2021. The GEM benchmark: Natural language generation, its evaluation and metrics. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pages 96-120, Online. Association for Computational Linguistics. +Sebastian Gehrmann, Abhik Bhattacharjee, Abinaya Mahendiran, Alex Wang, Alexandros Papangelis, Aman Madaan, Angelina McMillan-Major, Anna Shvets, Ashish Upadhyay, Bingsheng Yao, Bryan Wilie, Chandra Bhagavatula, Chaobin You, Craig Thomson, Cristina Garbacea, Dakuo Wang, Daniel Deutsch, Deyi Xiong, Di Jin, Dimitra Gkatzia, Dragomir Radev, Elizabeth Clark, Esin Durmus, Faisal Ladhak, Filip Ginter, Genta Indra Winata, Hendrik Strobelt, Hiroaki Hayashi, Jekaterina Novikova, Jenna Kanerva, Jenny Chim, Jiawei Zhou, Jordan Clive, Joshua Maynez, Joao Sedoc, Juraj Juraska, Kaustubh Dhole, Khyathi Raghavi Chandu, Leonardo F. R. 
Ribeiro, Lewis Tunstall, Li Zhang, Mahima Pushkarna, Mathias Creutz, Michael White, Mihir Sanjay Kale, Moussa Kamal Eddine, Nico Daheim, Nishant Subramani, Ondrej Dusek, Paul Pu Liang, Pawan Sasanka Ammanamanchi, Qi Zhu, Ratish Puduppully, Reno Kriz, Rifat Shahriyar, Ronald Cardenas, Saad Mahamood, Salomey Osei, Samuel Cahyawijaya, Sanja Štajner, Sebastien Montella, Shailza, Shailza Jolly, Simon Mille, Tahmid Hasan, Tianhao Shen, Tosin Adewumi, Vikas Raunak, Vipul Raheja, Vitaly Nikolaev, Vivian Tsai, Yacine Jernite, Ying Xu, Yisi Sang, Yixin Liu, and Yufang Hou. 2022. GEMv2: Multilingual NLG benchmarking in a single line of code. arXiv preprint arXiv:2206.11249. +Daniela Gerz, Pei-Hao Su, Razvan Kusztos, Avishek Mondal, Michał Lis, Eshan Singhal, Nikola Mrksic, Tsung-Hsien Wen, and Ivan Vulic. 2021. Multilingual and cross-lingual intent detection from spoken data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7468-7475. +Anthony G Greenwald and Mahzarin R Banaji. 1995. Implicit social cognition: attitudes, self-esteem, and stereotypes. Psychological Review, 102(1):4. +Jia-Chen Gu, Zhenhua Ling, Yu Wu, Quan Liu, Zhigang Chen, and Xiaodan Zhu. 2021. Detecting speaker personas from conversational texts. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1126-1136. + +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural Computation*, 9(8):1735-1780. +Canming Huang, Weinan He, and Yongmei Liu. 2021. Improving unsupervised commonsense reasoning using knowledge-enabled natural language inference. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 4875-4885. +Harsh Jhamtani, Varun Gangal, Eduard Hovy, and Taylor Berg-Kirkpatrick. 2021. Investigating robustness of dialog models to popular figurative language constructs.
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7476-7485.

Khari Johnson. 2021. The efforts to make text-based AI less racist and terrible. https://tinyurl.com/5x8rah4s. Accessed: 17 June 2021.

Patrick Kahardipraja, Brielen Madureira, and David Schlangen. 2021. Towards incremental transformers: An empirical analysis of transformer models for incremental NLU. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1178-1189.

Ashwin Kalyan, Abhinav Kumar, Arjun Chandrasekaran, Ashish Sabharwal, and Peter Clark. 2021. How much coffee was consumed during EMNLP 2019? Fermi problems: A new reasoning challenge for AI. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7318-7328.

Katharina Kann and Mauro M. Monsalve-Mercado. 2021. Coloring the black box: What synesthesia tells us about character embeddings. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2673-2685, Online. Association for Computational Linguistics.

Alan Lakein. 1973. How to get control of your time and your life. New York: NAL Penguin Inc.

Ofer Lavi, Ella Rabinovich, Segev Shlomov, David Boaz, Inbal Ronen, and Ateret Anaby Tavor. 2021. We've had this conversation before: A novel approach to measuring dialog similarity. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1169-1177.

Yunlong Liang, Chulun Zhou, Fandong Meng, Jinan Xu, Yufeng Chen, Jinsong Su, and Jie Zhou. 2021. Towards making the most of dialogue characteristics for neural chat translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 67-79.

Zachary C. Lipton and Jacob Steinhardt. 2019. Troubling trends in machine learning scholarship: Some ML papers suffer from flaws that could mislead the public and stymie future research.
Queue, 17(1):45-77. + +Dan Liu, Mengge Du, Xiaoxi Li, Ya Li, and Enhong Chen. 2021. Cross attention augmented transducer networks for simultaneous translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 39-55. +Wenchang Ma, Ryuichi Takanobu, and Minlie Huang. 2021. Cr-walker: Tree-structured graph reasoning and dialog acts for conversational recommendation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1839-1851. +Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul A Crook, Bing Liu, Zhou Yu, Eunjoon Cho, Pascale Fung, and Zhiguang Wang. 2021. Continual learning in task-oriented dialogue systems. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7452-7467. +Virginia Smith Major, Katherine J Klein, and Mark G Ehrhart. 2002. Work time, work interference with family, and psychological distress. Journal of applied psychology, 87(3):427. +Nikita Moghe, Mark Steedman, and Alexandra Birch. 2021. Cross-lingual intermediate fine-tuning improves dialogue state tracking. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1137-1150. +Makayla Moster, Denae Ford, and Paige Rodeghero. 2021. "is my mic on?" preparing se students for collaborative remote work and hybrid team communication. In 2021 IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET), pages 89-94. IEEE. +Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, and Nazneen Fatema Rajani. 2021. DART: Open-domain structured data record to text generation. 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 432-447, Online. Association for Computational Linguistics.

Chaoqun Ni, Elise Smith, Haimiao Yuan, Vincent Lariviere, and Cassidy R. Sugimoto. 2021. The gendered nature of authorship. Science Advances, 7(36):eabe4639.

Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2021. ERNIE-M: Enhanced multilingual representation by aligning cross-lingual semantics with monolingual corpora. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 27-38.

Letitia Parcalabescu, Nils Trost, and Anette Frank. 2021. What is multimodality? In Proceedings of the 1st Workshop on Multimodal Semantic Representations (MMSR), pages 1-10, Groningen, Netherlands (Online). Association for Computational Linguistics.

Dinesh Raghu, Shantanu Agarwal, Sachindra Joshi, et al. 2021. End-to-end learning of flowchart grounded task-oriented dialogs. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4348-4366.

Inioluwa Deborah Raji, I. Elizabeth Kumar, Aaron Horowitz, and Andrew Selbst. 2022. The fallacy of AI functionality. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 959-972.

Pedro Rodriguez, Joe Barrow, Alexander Miserlis Hoyle, John P. Lalor, Robin Jia, and Jordan Boyd-Graber. 2021. Evaluation examples are not equally informative: How should that change NLP leaderboards? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4486-4503, Online. Association for Computational Linguistics.

Anna Rogers. 2021. Changing the world by changing the data.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2182-2194, Online. Association for Computational Linguistics.

Elizabeth Salesky, David Etter, and Matt Post. 2021. Robust open-vocabulary translation from visual text representations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7235-7252.

Simon. 2020. Google Duplex: The effects of deception on well-being. https://tinyurl.com/2yadfuer. Accessed: 11 June 2020.

Jongyoon Song, Sungwon Kim, and Sungroh Yoon. 2021. AligNART: Non-autoregressive neural machine translation by jointly learning to estimate alignment and translate. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1-14.

Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.

Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650.

Richard Van Noorden. 2018. Leadership problems in the lab. Nature, 557(3).

Ivan Vulić, Pei-Hao Su, Samuel Coope, Daniela Gerz, Paweł Budzianowski, Iñigo Casanueva, Nikola Mrkšić, and Tsung-Hsien Wen. 2021. ConvFiT: Conversational fine-tuning of pretrained language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1151-1168.

Haoran Xu, Benjamin Van Durme, and Kenton Murray. 2021. BERT, mBERT, or BiBERT? A study on contextualized embeddings for neural machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6663-6675.

Shaolei Zhang and Yang Feng. 2021. Universal simultaneous machine translation with mixture-of-experts wait-k policy.
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7306-7317.

Shiyue Zhang and Mohit Bansal. 2021. Finding a balanced degree of automation for summary evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6617-6632.

Jinming Zhao, Philip Arthur, Gholamreza Haffari, Trevor Cohn, and Ehsan Shareghi. 2021a. It is not as good as you think! Evaluating simultaneous machine translation on interpretation data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6707-6715.

Yangyang Zhao, Zhenyu Wang, Changxi Zhu, and Shihan Wang. 2021b. Efficient dialogue complementary policy learning via deep Q-network policy and episodic memory policy. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4311-4323.

Kunrui Zhu, Yan Gao, Jiaqi Guo, and Jian-Guang Lou. 2021. Translating headers of tabular data: A pilot study of schema translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 56-66.

# A Appendix

In Table 1 we provide our analysis of selected EMNLP 2021 papers. Engaging with Users indicates that the researchers engaged with humans, either during the design phase or for evaluation. In our analysis, none of the papers engage with users throughout the process; humans are involved only in the evaluation (3 papers). User-driven indicates that the motivation is grounded in user needs (2 papers). The following tracks are considered: session 1, track A: machine translation and multi-linguality 1; session 3, track B: dialogue and interactive systems 1; session 4, track B: dialogue and interactive systems 2; session 5, track A: question answering 1; session 6, track B: summarization; session 7, track A: machine translation and multi-linguality 2; session 7, track B: question answering 2.
| # | Paper | Engaging with Users | User-driven |
|---|-------|---------------------|-------------|
| 1 | AligNART (Song et al., 2021) | no | no |
| 2 | Zero-Shot Cross-Lingual Transfer (Chen et al., 2021) | no | no |
| 3 | ERNIE-M (Ouyang et al., 2021) | no | no |
| 4 | Cross attention augmented transducer (Liu et al., 2021) | no | no |
| 5 | Translating Headers of Tabular Data (Zhu et al., 2021) | no | no |
| 6 | Towards Making the Most (Liang et al., 2021) | no | no |
| 7 | MindCraft (Bara et al., 2021) | yes | no |
| 8 | Detecting Speaker Personas (Gu et al., 2021) | no | no |
| 9 | Cross-lingual Intermediate Fine-tuning (Moghe et al., 2021) | no | no |
| 10 | ConvFiT (Vulić et al., 2021) | no | no |
| 11 | We've had this conversation before (Lavi et al., 2021) | no | no |
| 12 | Towards Incremental Transformers (Kahardipraja et al., 2021) | no | no |
| 13 | Feedback Attribution (Falke and Lehnen, 2021) | no | yes |
| 14 | CR-Walker (Ma et al., 2021) | no | no |
| 15 | Iconary (Clark et al., 2021) | yes | no |
| 16 | Improving Unsupervised Commonsense (Huang et al., 2021) | no | no |
| 17 | Cryptonite (Efrat et al., 2021) | no | no |
| 18 | Efficient Dialogue Complementary Policy Learning (Zhao et al., 2021b) | yes | no |
| 19 | End-to-End Learning of Flowchart (Raghu et al., 2021) | no | yes |
| 20 | Aspect-Controllable Opinion Summarization (Amplayo et al., 2021) | no | no |
| 21 | Finding a Balanced Degree of Automation (Zhang and Bansal, 2021) | no | no |
| 22 | BERT, mBERT, or BiBERT (Xu et al., 2021) | no | no |
| 23 | It Is Not As Good As You Think (Zhao et al., 2021a) | no | no |
| 24 | Robust Open-Vocabulary Translation (Salesky et al., 2021) | no | no |
| 25 | Universal Simultaneous Machine Translation (Zhang and Feng, 2021) | no | no |
| 26 | How much coffee was consumed (Kalyan et al., 2021) | no | no |
| 27 | Will this Question be Answered (Garg and Moschitti, 2021) | no | no |
| 28 | Continual Learning (Madotto et al., 2021) | no | no |
| 29 | Multilingual and Cross-Lingual Intent (Gerz et al., 2021) | no | no |
| 30 | Investigating Robustness of Dialog Models (Jhamtani et al., 2021) | no | no |
Table 1: Our analysis of 30 randomly chosen papers from EMNLP 2021.
# AMAL: Meta Knowledge-Driven Few-Shot Adapter Learning

S. K. Hong*
Samsung SDS
s.k.hong@samsung.com

Tae Young Jang
Samsung SDS
tae10.jang@samsung.com

# Abstract

NLP has advanced greatly alongside the proliferation of Transformer-based pre-trained language models. To adapt to a downstream task, a pre-trained language model needs to be fine-tuned with a sufficient supply of annotated examples. In recent years, Adapter-based fine-tuning methods have expanded the applicability of pre-trained language models by substantially lowering the required amount of annotated examples.
However, existing Adapter-based methods still fail to yield meaningful results in the few-shot regime, where only a few annotated examples are provided. In this study, we present a meta-learning-driven low-rank adapter pooling method, called AMAL, for leveraging pre-trained language models even with just a few data points. We evaluate our method on five text classification benchmark datasets. The results show that AMAL significantly outperforms previous few-shot learning methods and achieves a new state of the art.

# 1 Introduction

Since Transformer-based pre-trained language models (PLMs) trained on massive corpora (Vaswani et al., 2017) made a major impact on NLP, fine-tuning PLMs (Devlin et al., 2019; Lan et al., 2019; Liu et al., 2019) has led to large improvements on a variety of downstream NLP tasks. Yet it remains challenging to fine-tune PLMs (Zhang et al., 2020) in the few-shot regime. Recently, Adapters (Houlsby et al., 2019a; Ben Zaken et al., 2022; Fu et al., 2022; Hu et al., 2021) have provided a more efficient way of fine-tuning PLMs, by tuning a small number of extra weights (the Adapters) while freezing the rest. Nevertheless, existing Adapters still fail to yield significant results in the few-shot regime; refer to Table 4 in the Appendix for the performance of prior Adapters on few-shot classification problems.

Since GPT-3 (Brown et al., 2020) was introduced, prompt tuning has swept the machine learning community. However, finding proper prompts (Schick and Schütze, 2020) is still a delicate task, requiring labor-intensive manual handcrafting with domain expertise as well as an in-depth understanding of the language model's inner mechanisms.

In this paper, we present a cost-effective method for language model fine-tuning that is applicable, without customization, to a variety of language models and Adapter types.
We focus on small to mid-sized language models such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2019), BART (Lewis et al., 2020), or DeBERTa (He et al., 2020), because they are widely deployed in production systems due to their economy and low carbon footprint.

In this paper, we propose a meta-knowledge-driven few-shot adapter learning method, called AMAL (Adapter-by-Meta-Learning), based on a novel meta-learning framework through which meta-level, layer-wise adaptation kernels are derived in an end-to-end manner. Our design takes inspiration from Aghajanyan et al. (2020), who show that over-parameterized pre-trained language models in fact have a low intrinsic dimension. We hypothesize that language model fine-tuning can be accomplished at a low intrinsic rank while keeping the pre-trained weights frozen, which leads to our proposed low-rank adapter pooling approach.

AMAL rests on two key ideas: (1) constructing the language model adapters' intrinsic kernels from tasks, and (2) inferring the optimal task-specific language model adapter for a given task by referring to a meta-level latent embedding space over all tasks.

# 2 Related Work

Few-shot Text Classification: DS (Bao et al., 2019) refers to the underlying word distributions across all available classes and specifies important lexical features for new classes. Frog-GNN (Xu and Xiang, 2021) focuses on all query-support pairs and proposes a multi-perspective aggregation-based graph neural network to explicitly reflect intra-class similarity and inter-class dissimilarity. LEA (Hong and Jang, 2022) proposes a meta-learning-based document embedding approach and derives a meta-attention aspect dictionary to be reused when given a new task.

Parameter-Efficient Fine-Tuning: Houlsby et al. (2019a) proposed two trainable adapter layers per Transformer block, where each adapter has two feedforward linear layers: one down-project and one up-project layer.
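For concreteness, this bottleneck adapter structure (down-project, nonlinearity, up-project, plus a residual connection) can be sketched in a few lines of numpy. This is our own minimal illustration, not the authors' code; the sizes and the choice of ReLU are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 768, 64          # PLM hidden size and adapter bottleneck size (illustrative)

W_down = rng.standard_normal((d, m)) * 0.01   # down-projection
W_up   = rng.standard_normal((m, d)) * 0.01   # up-projection

def adapter(h):
    # Bottleneck adapter: down-project, nonlinearity, up-project, residual.
    return h + np.maximum(h @ W_down, 0.0) @ W_up

h = rng.standard_normal((4, d))   # a small batch of token representations
out = adapter(h)
assert out.shape == h.shape       # the adapter preserves the hidden dimension
```

Only `W_down` and `W_up` (roughly `2*d*m` parameters per adapter) are trained, while the PLM weights stay frozen.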
BitFit (Ben Zaken et al., 2022) shows that tuning just the bias terms of a PLM is almost as effective as full fine-tuning. AdapterBias (Fu et al., 2022) improves on BitFit by making the bias terms token-specific, with fewer trainable parameters. LoRA (Hu et al., 2021) is also an adapter-based fine-tuning approach, in which trainable rank-decomposition matrices are injected into each layer of the Transformer architecture while the weights of the pre-trained model are frozen.

AMAL can be seen as similar to LoRA in its use of the low-rank decomposition technique. However, as a meta-learning-based approach, AMAL can be applied to a broad range of language models and to all existing adapter-based methods, including LoRA.

# 3 Background

# 3.1 Few-Shot Text Classification

We address the few-shot text classification problem to demonstrate AMAL's few-shot language model adaptation performance. As usual, $C$-way $K$-shot means that only $K$ annotated examples are given for each of the $C$ classes of a task (denoted $\tau_{i}$), so the total number of examples is $K_{\tau_i} = K\times |\mathcal{C}|$.

# 3.2 Pre-Trained Language Models

We experiment with BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2019), BART (Lewis et al., 2020) and DeBERTa (He et al., 2020) as the underlying PLMs. These models prepend a dummy token to the original token sequence and provide a corresponding embedding for it (denoted [CLS]). In this study, the [CLS] embedding serves to probe the distinctive properties of every incoming task.

![](images/4b52199a9e44e53e89bb4a2c3256f1e8f041f69244932fe94bfc589dfa17e091.jpg)
Figure 1: Low rank adapter pooling

# 3.3 Meta Learning

In the meta-learning setting, tasks are divided into a meta-training set $(\mathcal{S}^{tr})$, a meta-validation set $(\mathcal{S}^{val})$ and a meta-test set $(\mathcal{S}^{test})$ with disjoint sets of classes.
Our meta-learning strategy follows the overall procedure of optimization-based meta-learning (Finn et al., 2017), so our proposed low-rank adapters are learned by alternating between two complementary processes: (1) low-rank adapter pooling (inner update) and (2) meta-optimization (outer update). For a task $\tau_{i}\sim p(\tau)$, the task data $\mathcal{D}_{\tau_i} = \{(x^i,y^i)\}$ consist of $\mathcal{D}_{\tau_i}^{tr}$ and $\mathcal{D}_{\tau_i}^{val}$ during the meta-training phase. At meta-test time, the dataset of a new task $\tau_{i}$ is given as $\mathcal{D}_{\tau_i} = (\mathcal{D}_{\tau_i}^{tr},\mathcal{D}_{\tau_i}^{te})$, where only a few annotated data points are provided.

# 4 Proposed Method: AMAL

In this section, we present the implementation of AMAL. The design embodies the hypothesis that language model adaptation can be performed at a low intrinsic rank. Here, we describe AMAL as applied to the original Adapter (Houlsby et al., 2019b) method. Importantly, AMAL is orthogonal to existing Adapter methods and can be combined with any of them. AMAL produces a task-specific adapter for each incoming task, and alternates between two update processes during meta-training: (1) low-rank adapter pooling and (2) meta-optimization.

# 4.1 Low Rank Adapter Model

As shown in Figure 1, each projection matrix $\mathcal{P}_l\in \mathbb{R}^{d\times m}$ of the adapter at the $l$-th layer is decomposed into three matrices:

$$
\mathcal {P} _ {l} = \mathcal {U} _ {l} \times \mathcal {E} _ {l} ^ {\tau_ {i}} \times \mathcal {V} _ {l} ^ {T} \tag {1}
$$

where $l$ is the layer index, and $\mathcal{U}_l\in \mathbb{R}^{d\times r}$, $\mathcal{E}_l^{\tau_i}\in \mathbb{R}^{r\times r}$, and $\mathcal{V}_l\in \mathbb{R}^{m\times r}$, given the PLM's original dimension $d$, the adapter's bottleneck dimension $m$, and the rank $r$ ($r \ll \min(d, m)$). $\mathcal{E}_l^{\tau_i}$ is a diagonal matrix.
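The decomposition in Eq. 1 can be illustrated with a minimal numpy sketch (dimensions and variable names are ours, chosen for illustration): per task, only the $r$ diagonal entries of $\mathcal{E}_l^{\tau_i}$ change, while the kernels $\mathcal{U}_l$ and $\mathcal{V}_l$ are shared across tasks.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, r = 768, 64, 4   # PLM hidden size, adapter bottleneck, low rank (r << min(d, m))

# Shared adapter kernels, meta-learned across tasks (Eq. 1).
U = rng.standard_normal((d, r))   # left kernels  U_l in R^{d x r}
V = rng.standard_normal((m, r))   # right kernels V_l in R^{m x r}

# Task-specific pooler: a diagonal r x r matrix produced per task.
e_task = rng.standard_normal(r)   # diagonal entries of E_l^{tau_i}
P = U @ np.diag(e_task) @ V.T     # projection matrix P_l in R^{d x m}

assert P.shape == (d, m)
# Only r numbers vary per task; the resulting projection has rank at most r.
assert np.linalg.matrix_rank(P) <= r
```

The per-task footprint is thus only $r$ scalars per projection matrix, which is what makes the pooling step cheap.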
For notational simplicity, we drop the distinction between the two adapters per block (i.e., lower and upper), and likewise between the up- and down-projections. Importantly, $\mathcal{E}_l^{\tau_i}$ is the $l$-th layer's low-rank adapter pooler for the task $\tau_i$, $\mathcal{U}_l$ the $l$-th layer's left adapter kernels, and $\mathcal{V}_l$ the right adapter kernels.

# 4.2 Low Rank Adapter Pooling (inner update)

The aim of the pooling is to derive a task-specific composition from the established adapter kernels, $\mathcal{U}$ and $\mathcal{V}$, which are obtained in the meta-optimization process.

To obtain the optimal adapter for a task $\tau_{i}$, the pooling process involves two important steps: (1) encoding the task $\tau_{i}$ into a low-dimensional latent embedding space $\mathcal{Z}$, and (2) producing the task-specific adapter pooler from the latent embedding $z^{\tau_i}$. The encoding pipeline is taken from Rusu et al. (2018). We employ the latent embedding space so that AMAL can summarize the properties extracted from tasks in the low-dimensional space $\mathcal{Z}$, instead of operating directly in the high-dimensional parameter space.
First, each task is fed into the encoding process, which is formulated as follows:

$$
z _ {n} ^ {\tau_ {i}} = \frac {1}{N K ^ {2}} \sum_ {k _ {n} = 1} ^ {K} \sum_ {m = 1} ^ {N} \sum_ {k _ {m} = 1} ^ {K} f _ {\theta_ {r}} \left(f _ {\theta_ {e}} \left(c _ {k _ {n}} ^ {\tau_ {i}}\right), f _ {\theta_ {e}} \left(c _ {k _ {m}} ^ {\tau_ {i}}\right)\right), \tag {2}
$$

where $z_{n}^{\tau_{i}}$ denotes the latent-space embedding of class $n$ under a given task $\tau_{i}$, $N$ is the total number of classes under the task, $K$ is the number of examples per class, the middle sum ranges over the classes $m$ of the task, $f_{\theta_r}$ is the relation network (Sung et al., 2018), and $f_{\theta_e}$ is an encoder network that transforms the delegate [CLS] embedding (denoted $c_{j}^{\tau_{i}}$ for the $j$-th text instance of task $\tau_{i}$) before the relation network. As a result, the class embedding $z_{n}^{\tau_{i}}$ keeps track of the pairwise relationships with the other classes, and the task-specific embedding $z^{\tau_{i}}$ is the concatenation of $z_{1}^{\tau_{i}}, \ldots, z_{N}^{\tau_{i}}$.

Subsequently, the task-specific latent embedding is passed to the decoding process, which maps it to the associated low-rank pooler.
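The task-encoding step of Eq. 2 can be sketched as follows, with toy random stand-ins for $f_{\theta_e}$ and $f_{\theta_r}$ (all sizes, names, and the choice of tanh layers are our illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, d, z_dim = 3, 5, 16, 8   # classes, shots, [CLS] dim, latent dim (toy sizes)

W_e = rng.standard_normal((d, z_dim))          # stand-in for encoder f_theta_e
W_r = rng.standard_normal((2 * z_dim, z_dim))  # stand-in for relation net f_theta_r

def f_e(c):                    # encoder: project a [CLS] embedding
    return np.tanh(c @ W_e)

def f_r(a, b):                 # relation net: score a pair of encodings
    return np.tanh(np.concatenate([a, b]) @ W_r)

cls = rng.standard_normal((N, K, d))           # [CLS] embeddings per class/shot

# Eq. 2: class embedding z_n averages the pairwise relations between
# class n's instances and the instances of every class of the task.
z = np.zeros((N, z_dim))
for n in range(N):
    for k_n in range(K):
        for m in range(N):
            for k_m in range(K):
                z[n] += f_r(f_e(cls[n, k_n]), f_e(cls[m, k_m]))
z /= N * K * K

z_task = z.reshape(-1)         # task embedding: concatenation of z_1 .. z_N
assert z_task.shape == (N * z_dim,)
```

The decoder $f_{\theta_d}$ would then map `z_task` to the diagonal entries of the pooler $\mathcal{E}^{\tau_i}$.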
The decoding process is formulated as follows:

$$
\mathcal{E}^{\tau_{i}} = f_{\theta_{d}}\left(z^{\tau_{i}}\right) \tag{3}
$$

where $\mathcal{E}^{\tau_i}$ denotes the low-rank adapter pooler for the task $\tau_{i}$, $f_{\theta_d}$ indicates the decoder network, and $z^{\tau_i}$ is the task's latent embedding.

Algorithm 1 Our Proposed Meta-Training
Require: Meta training set $S^{tr} \in \tau$, $r$ (rank), $d$, $m$
Require: Learning rates $\alpha, \beta, \lambda, \gamma$
Output: $\mathcal{U}, \mathcal{V}, \theta_e, \theta_r, \theta_d, \theta_\tau$
1: Randomly initialize $\mathcal{U}, \mathcal{V}, \theta_e, \theta_r, \theta_d, \theta_\tau$
2: Let $\phi = \{\mathcal{U}, \mathcal{V}, \theta_e, \theta_r, \theta_d, \theta_\tau\}$
3: while not converged do
4: for number of tasks in batch do
5: Sample task instance $\tau_i \sim S^{tr}$
6: Let $(\mathcal{D}^{tr}, \mathcal{D}^{val}) = \tau_i$
7: Initialize $\theta_{\tau_i}' = \theta_\tau$ and $z^{\tau_i'} = z^{\tau_i}$
8: for number of adaptation steps do
9: Encode [CLS] to $z^{\tau_i'}$ using $f_{\theta_e}$ and $f_{\theta_r}$
10: Produce $\mathcal{E}_{\tau_i}'$ from $z^{\tau_i'}$ using $f_{\theta_d}$
11: Generate document embeddings using $H^{\tau_i}$
12: Compute Task-Adaptation loss $\mathcal{L}_{\tau_i}^{tr}$
13: Perform gradient step w.r.t. $z^{\tau_i'}$ and $\theta_{\tau_i}'$
14: $z^{\tau_i'} \gets z^{\tau_i'} - \alpha \nabla_{z^{\tau_i'}} \mathcal{L}_{\tau_i}^{tr}$
15: $\theta_{\tau_i}' \gets \theta_{\tau_i}' - \alpha \nabla_{\theta_{\tau_i}'} \mathcal{L}_{\tau_i}^{tr}$
16: end for
17: Generate document embeddings using $H^{\tau_i}$
18: Compute Meta-Optimization loss $\mathcal{L}_{\tau_i}^{val}$
19: end for
20: Perform gradient step w.r.t $\phi$
21: $\phi \gets \phi - \beta \nabla_{\phi} \sum_{\tau_i} \mathcal{L}_{\tau_i}^{val} + \lambda \cdot \Omega + \gamma \cdot \mathcal{R}$
22: end while
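The inner loop of Algorithm 1 (lines 8-16) adapts only the latent code $z^{\tau_i'}$ and the task head $\theta_{\tau_i}'$ by plain gradient descent, leaving the shared kernels and meta-networks frozen. A toy sketch of those two update lines, using a hypothetical quadratic loss with an analytic gradient in place of $\mathcal{L}_{\tau_i}^{tr}$:

```python
import numpy as np

def inner_adapt(z, theta, grad_loss, lr=0.1, steps=40):
    """Task adaptation: repeated gradient steps on the latent code z and the
    task head theta only; shared kernels and meta-networks stay frozen."""
    for _ in range(steps):
        gz, gt = grad_loss(z, theta)
        z = z - lr * gz          # Algorithm 1, line 14
        theta = theta - lr * gt  # Algorithm 1, line 15
    return z, theta

# hypothetical loss L = ||z - 1||^2 + ||theta - 2||^2, with analytic gradients
grad = lambda z, t: (2 * (z - 1.0), 2 * (t - 2.0))
z0, t0 = np.zeros(3), np.zeros(3)
z_adapted, t_adapted = inner_adapt(z0, t0, grad)
assert np.allclose(z_adapted, 1.0, atol=1e-3)
assert np.allclose(t_adapted, 2.0, atol=1e-3)
```

The default `lr=0.1` and `steps=40` mirror the inner-loop learning rate and adaptation-step count reported in Appendix A.3.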
To sum up, a new task is eventually converted into a task-specific low-rank adapter pooler via modulation in the low-dimensional latent space.

# 4.3 Meta-Optimization (outer update)

As noted in Algorithm 1, AMAL updates three neural network blocks (i.e., $\theta_{e},\theta_{r},\theta_{d}$) as well as the left adapter kernels $\mathcal{U}$ and the right adapter kernels $\mathcal{V}$ by minimizing the following objective in the meta-optimization process:

$$
\min_{\theta_{e}, \theta_{r}, \theta_{d}, \mathcal{U}, \mathcal{V}} \sum_{\tau_{i}} \left(\mathcal{L}_{\tau_{i}}^{val} + \lambda \cdot \Omega + \gamma \cdot \mathcal{R}\right) \tag{4}
$$

where $\Omega$ is a weighted KL-divergence term, $D_{KL}(q(z^{\tau_i}|\mathcal{D}_n^{tr})\,\|\,p(z^{\tau_i}))$ with $p(z^{\tau_i}) = \mathcal{N}(0,\mathcal{I})$, which regularizes the latent space with the aim of learning a disentangled embedding. $\mathcal{R}$ is a penalty term that pushes $\mathcal{U}$ and $\mathcal{V}$ toward near-orthogonality:

$$
\mathcal{R} = \left\| \mathcal{U} \mathcal{U}^{T} - \mathcal{I} \right\|_{F} + \left\| \mathcal{V} \mathcal{V}^{T} - \mathcal{I} \right\|_{F} \tag{5}
$$

where $\|\cdot\|_{F}$ denotes the Frobenius norm, and both $\mathcal{U}$ and $\mathcal{V}$ are randomly initialized. All hyperparameters are kept identical across layers.

Table 1: Results of 5-way 1-shot and 5-way 5-shot classification
| Method | Amazon 1-shot | Amazon 5-shot | Huffpost 1-shot | Huffpost 5-shot | RCV-1 1-shot | RCV-1 5-shot | Reuters 1-shot | Reuters 5-shot | 20 Newsgroup 1-shot | 20 Newsgroup 5-shot |
|---|---|---|---|---|---|---|---|---|---|---|
| MAML (Finn et al., 2017) | 50.36 % | 59.58 % | 43.04 % | 55.17 % | 51.15 % | 66.98 % | 46.31 % | 70.31 % | 31.39 % | 45.05 % |
| Proto (Snell et al., 2017) | 45.54 % | 71.30 % | 34.70 % | 50.69 % | 44.77 % | 58.91 % | 62.41 % | 73.05 % | 31.38 % | 37.02 % |
| LEO (Rusu et al., 2018) | 49.09 % | 59.48 % | 45.07 % | 60.69 % | 51.30 % | 63.90 % | 59.13 % | 73.10 % | 37.72 % | 48.08 % |
| Induction (Geng et al., 2019) | 45.17 % | 62.69 % | 46.51 % | 49.02 % | 43.82 % | 59.94 % | 61.48 % | 70.09 % | 32.12 % | 45.72 % |
| DS (Bao et al., 2019) | 62.6 % | 81.2 % | 43.0 % | 63.5 % | 54.1 % | 75.3 % | 81.8 % | 96 % | 52.1 % | 68.3 % |
| Frog-GNN (Xu and Xiang, 2021) | 71.5 % | 83.6 % | 54.1 % | 69.6 % | - | - | - | - | - | - |
| LEA (Hong and Jang, 2022) | 63.6 % | 82.69 % | 46.98 % | 64.4 % | 51.96 % | 73.81 % | 71.64 % | 83.07 % | 43.56 % | 65.29 % |
| P-tuning v2 (Liu et al., 2022), BERT | 32.75 % | 66.87 % | 27.59 % | 50.67 % | 21.88 % | 36.67 % | 30.81 % | 84.80 % | 28.00 % | 59.67 % |
| P-tuning v2, RoBERTa | 27.89 % | 71.13 % | 31.69 % | 58.93 % | 22.33 % | 39.53 % | 29.61 % | 70.67 % | 25.36 % | 47.13 % |
| AMAL (BERT) | **80.18 %** | 89.07 % | 56.27 % | 74.31 % | 63.73 % | 83.11 % | 90.84 % | **97.87 %** | 56.80 % | 70.49 % |
| AMAL (ALBERT) | 47.20 % | 78.49 % | 41.60 % | 61.66 % | 47.29 % | 76.09 % | 84.62 % | 94.49 % | 42.04 % | 65.60 % |
| AMAL (RoBERTa) | 76.36 % | 90.13 % | 55.11 % | 74.04 % | **71.73 %** | **87.02 %** | **92.0 %** | 97.78 % | **60.27 %** | 73.24 % |
| AMAL (BART) | 77.16 % | 89.60 % | 57.22 % | 75.02 % | 70.84 % | 86.22 % | 90.76 % | 97.42 % | 59.29 % | 73.51 % |
| AMAL (DeBERTa-base) | 76.71 % | 88.00 % | 54.31 % | 73.42 % | 71.56 % | 82.76 % | 85.42 % | 95.02 % | 52.18 % | 69.42 % |
| AMAL (DeBERTa-large) | 79.20 % | **90.58 %** | **60.27 %** | **78.04 %** | 71.43 % | 84.44 % | 91.29 % | 97.86 % | 53.87 % | **75.29 %** |
Note: The highest performance in each dataset and setting is highlighted in bold.

# 5 Experimental Results

# 5.1 Document Embedding for Classification

Here, we briefly explain how we generate document embeddings for our experiments. For a text input of length $L$, we take the embedding vectors of the individual tokens from the last layer of the given PLM, denoted $H_{j}^{\tau_{i}} = [h_{j,1}^{\tau_{i}},\dots,h_{j,L}^{\tau_{i}}]$ for the $j$-th text example of task $\tau_{i}$. For text classification, we average $H_{j}^{\tau_{i}}$ column-wise and then feed the result into a fully connected neural network with parameters $\theta_{\tau_i}'$, which are optimized in the inner update.

# 5.2 Dataset and Baselines

We evaluate AMAL on five text classification datasets: 20 Newsgroups (Lang, 1995), Huffpost headlines (Misra and Grover, 2021), Reuters-21578 (Lewis, 1997), RCV1 (Lewis et al., 2004), and Amazon product reviews (He and McAuley, 2016). We compare AMAL with eight baseline methods: MAML (Finn et al., 2017), Proto (Snell et al., 2017), LEO (Rusu et al., 2018), Induction (Geng et al., 2019), DS (Bao et al., 2019), Frog-GNN (Xu and Xiang, 2021), LEA (Hong and Jang, 2022), and P-tuning v2 (Liu et al., 2022) as a prompt-based fine-tuning method. We follow the same experimental settings as Bragg et al. (2021) for all datasets, except for RCV-1, for which we use the split of Bao et al. (2019).

# 5.3 Overall Performance

We evaluate AMAL in both 5-way 1-shot and 5-way 5-shot settings; the results are shown in Table 1. All scores are averages over three trials. $\mathrm{BERT}_{\mathrm{base}}$ is the base PLM unless explicitly indicated otherwise.
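The document-embedding recipe of Section 5.1 (column-wise averaging of last-layer token embeddings, followed by a fully connected head) can be sketched as follows; the hidden size and class count are illustrative, and the random weights stand in for the learned head:

```python
import numpy as np

def document_embedding(H):
    """H: (L, hidden) last-layer token embeddings for one document.
    A column-wise average over the L tokens gives one document vector."""
    return H.mean(axis=0)

def predict(H, W, b):
    """Apply the fully connected head (theta') optimized in the inner update."""
    logits = document_embedding(H) @ W + b
    return int(np.argmax(logits))

rng = np.random.default_rng(0)
H = rng.normal(size=(12, 768))   # 12 tokens, hidden size 768
W = rng.normal(size=(768, 5))    # 5-way classification head
b = np.zeros(5)
label = predict(H, W, b)
assert 0 <= label < 5
```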
For MAML, Proto, LEO and Induction, the document embeddings are formed as explained in Section 5.1. For DS and Frog-GNN, we quote the results reported in (Bao et al., 2019) and (Xu and Xiang, 2021), since the experimental settings are identical. We applied AMAL to a wide range of small to mid-sized PLMs: $\mathrm{BERT}_{\text{base}}$ (Devlin et al., 2019), $\mathrm{ALBERT}_{\text{base}}$ (Lan et al., 2019), $\mathrm{RoBERTa}_{\text{base}}$ (Liu et al., 2019), $\mathrm{BART}_{\text{base}}$ (Lewis et al., 2020), and DeBERTa (He et al., 2020) to verify its broad applicability. For $\mathrm{BART}_{\text{base}}$, we treated the decoder's final hidden-state embedding of the last token as the [CLS] embedding, as in (Lewis et al., 2020).

![](images/d739439c69904425494b58e8533603318706c02369b75e702d831ae3e172a122.jpg)
Figure 2: 5-way 5-shot prediction accuracy depending on the number of top-most layers with adapters.

As shown in Table 1, AMAL outperforms the previous methods specialized for few-shot classification on all of the datasets by a large margin: $27.69\%$ in 5-way 1-shot classification and $22.03\%$ in 5-way 5-shot classification. These results demonstrate that AMAL offers agile adaptation of diverse small to mid-sized PLMs in the few-shot regime.

![](images/a284ec87cdfa6779f8ec4ae0dda8455a6539dac2e3e108ba0004e5197646baba.jpg)
(a)

![](images/827e046bfcd97e049fc5cfb3703b877b169c8f8c1f820a5b146bd68152c1135f.jpg)
(b)

![](images/4fd12a65b331256fb73e91bd8ef553aa582e28b8beecff12ef6db998b8bc2011.jpg)
(c)

![](images/b35668a1424fb787f815f4b52f1f45f3d1917e112398587613b6bdca095d62f9.jpg)
(d)

![](images/4a3ebd71cb1ab70a9cc95b84a92e92996ed458482369f7c3e2d78716f303daa8.jpg)
(e)

![](images/ad26ce42a20d92c78e94b615157e38448f494b4744722181ad7c0d161eeb44f4.jpg)
(f)
Figure 3: t-SNE plot of the embedding space before and after adaptation on 20 Newsgroups. (a)-(c) show the space before the low-rank adapter pooling process.
(d)-(f) show the task-specific embedding space after the pooling process. (a), (d): the embeddings for the seven top-level macro domains. (b), (e): same as (a) and (d), but highlighted for the classes under the science domain. (c), (f): same as (a) and (d), but highlighted for the four classes under the recreation domain.

# 5.4 The Impact of the Number of AMAL-equipped Layers

We explore the effect of the number of layers equipped with AMAL, using $\mathrm{BERT}_{\mathrm{base}}$ as the base PLM. We monitored performance while incrementally extending the number of AMAL-equipped layers, starting from the last layer and proceeding towards the input layer. As shown in Figure 2, prioritizing the top-most layers is the most cost-effective way to apply AMAL. It is also evident that, starting from the sixth or seventh layer from the top, the benefit of inserting AMAL into the next lower layer becomes insignificant. These results show that few-shot performance can be greatly improved even with a small number of adapters, and that efficiency can be tuned in a fine-grained manner by adjusting the number of AMAL-equipped layers.

# 5.5 Visualization of Task-Specific Document Embeddings

We plot the initial document embeddings and the corresponding fine-tuned embeddings obtained by AMAL for the 20 Newsgroups dataset (Figure 3). For the visualization, we randomly sampled four hundred tasks, each a 5-way 1-shot episode drawn from all available classes. All embeddings were projected into 2-D space via t-SNE. Figures 3a, 3b, and 3c show the initial embeddings before adaptation, and Figures 3d, 3e, and 3f show the adapted embeddings. Figures 3a and 3d cover the domains 'atheism', 'computer', 'for-sale', 'recreation', 'science', 'religion', and 'talk'. Figures 3b and 3e highlight the topics of the 'science' domain, and Figures 3c and 3f those of the 'recreation' domain.
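The episode construction used for this visualization (400 tasks, each a 5-way 1-shot sample over the available classes) can be sketched with a hypothetical helper; the class pools below are placeholders:

```python
import random

def sample_episodes(class_pools, n_way=5, k_shot=1, n_tasks=400, seed=0):
    """class_pools: dict mapping class name -> list of example ids.
    Each episode picks n_way distinct classes and k_shot examples per class."""
    rng = random.Random(seed)
    episodes = []
    for _ in range(n_tasks):
        ways = rng.sample(sorted(class_pools), n_way)
        episodes.append({c: rng.sample(class_pools[c], k_shot) for c in ways})
    return episodes

# placeholder pools: 7 classes with 10 examples each
pools = {f"class_{i}": list(range(10)) for i in range(7)}
eps = sample_episodes(pools)
assert len(eps) == 400
assert all(len(e) == 5 and all(len(v) == 1 for v in e.values()) for e in eps)
```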
# 6 Conclusion

We hypothesized that language model adaptation can be performed at a low intrinsic rank, especially when only a few examples are available. We designed a novel meta-learning-based low-rank adaptation method for small to mid-sized pre-trained language models, allowing a new task to be learned cost-effectively in the few-shot regime. We demonstrated that the combination of low-rank matrix decomposition and meta-learning is effective enough to reap the benefits of small to mid-sized pre-trained language models in practical scenarios with scarce annotated data.

# 7 Limitations

AMAL may be difficult to apply to unidirectional language models such as GPT-2 (Radford et al., 2019) and GPT-3 (Brown et al., 2020), because unidirectional models only encode the context that resides to the left of the [CLS] token in the input.

# References

Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. 2020. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. arXiv preprint arXiv:2012.13255.
Yujia Bao, Menghua Wu, Shiyu Chang, and Regina Barzilay. 2019. Few-shot text classification with distributional signatures. In International Conference on Learning Representations.
Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. 2022. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1-9, Dublin, Ireland. Association for Computational Linguistics.
Jonathan Bragg, Arman Cohan, Kyle Lo, and Iz Beltagy. 2021. Flex: Unifying evaluation for few-shot nlp. Advances in Neural Information Processing Systems, 34:15787-15800.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners.
Advances in neural information processing systems, 33:1877-1901. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186. +Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126-1135. PMLR. +Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, and Hung-yi Lee. 2022. Adapterbias: Parameter-efficient token-dependent representation shift for adapters in nlp tasks. arXiv preprint arXiv:2205.00305. +Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), + +pages 3816-3830, Online. Association for Computational Linguistics. +Ruiying Geng, Binhua Li, Yongbin Li, Xiaodan Zhu, Ping Jian, and Jian Sun. 2019. Induction networks for few-shot text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3904-3913. +Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations. +Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In *proceedings of the 25th international conference on world wide web*, pages 507-517. +S. K. Hong and Tae Young Jang. 2022. 
Lea: Meta knowledge-driven self-attentive document embedding for few-shot text classification. In North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics. +Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019a. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799. PMLR. +Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019b. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 2790-2799. PMLR. +Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2021. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations. +Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. In International Conference on Learning Representations. +Ken Lang. 1995. Newsweeder: Learning to filter netnews. In Machine Learning Proceedings 1995, pages 331-339. Elsevier. +David D. Lewis. 1997. Reuters-21578, distribution 1.0. + +David D Lewis, Yiming Yang, Tony Russell-Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. Journal of machine learning research, 5(Apr):361-397. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880. +Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022. P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61-68. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Rishabh Misra and Jigyasa Grover. 2021. *Sculpting Data for ML: The first act of Machine Learning*. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. +Andrei A Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. 2018. Meta-learning with latent embedding optimization. In International Conference on Learning Representations. +Timo Schick and Hinrich Schütze. 2020. It's not just size that matters: Small language models are also few-shot learners. arXiv preprint arXiv:2009.07118. +Jake Snell, Kevin Swersky, and Richard S Zemel. 2017. Prototypical networks for few-shot learning. arXiv preprint arXiv:1703.05175. +Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1199-1208. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. +Shiyao Xu and Yang Xiang. 2021. 
Frog-gnn: Multi-perspective aggregation based graph neural network for few-shot text classification. Expert Systems with Applications, 176:114795.

Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2020. Revisiting few-sample bert fine-tuning. arXiv preprint arXiv:2006.05987.

# A Appendix

# A.1 Datasets

We introduce the datasets and the train/validation/test splits used in our experiments.

20 Newsgroups is a collection of discourses in newsgroup posts for 20 topics (Lang, 1995).

Huffpost Headlines is a collection of news headlines published in the Huffington Post from 2012 to 2018 (Misra and Grover, 2021). It is composed of 41 topics.

Reuters-21578 is composed of documents that appeared on the Reuters newswire in 1987 (Lewis, 1997). We adopted the ApteMod version and discarded the documents with more than one label to avoid ambiguity, leaving 31 classes.

RCV-1 is a set of newswire stories published by Reuters journalists from 1996 to 1997 (Lewis et al., 2004) and comprises 71 topic classes.

Amazon data is a real-world dataset of customer reviews collected from Amazon.com, covering 24 types of product categories (He and McAuley, 2016).

To train and evaluate the models, we divided each of the aforementioned datasets into a meta-training set $(\mathcal{S}^{tr})$, meta-validation set $(\mathcal{S}^{val})$, and meta-test set $(\mathcal{S}^{test})$ as disjoint sets of classes. We followed the same split of classes as Bragg et al. (2021) for all datasets.

# A.2 Implementation Details

Table 2 specifies the detailed architecture of AMAL. In the encoder module, the [CLS] vector, which has the same size as the output of BERT-base-uncased, ALBERT-base, and RoBERTa-base, is linearly transformed into a 64-dimensional vector. The relation network module is a two-layer neural network with ReLU activation.
The decoder module is a single-layer neural network with ReLU, and the classifier is likewise a single-layer neural network with ReLU.

Table 2: Architecture details of AMAL

| Module Name | Architecture | Shape of (input, output) | # of Params |
|---|---|---|---|
| Encoder | Linear | (768, 64) | 153.6K |
| Relation Network | 2-layer MLP with ReLU | (2×64, 2×64) → ReLU → (2×64, 2×64) → ReLU | 32.8K |
| Decoder | 1-layer MLP with ReLU | (64, 150) → ReLU → (150, 154) | 32.7K |
| Task Classifier | 1-layer MLP with ReLU | (768, 300) → ReLU → (300, 5) | 231.9K |
# A.3 Training Details and Hyperparameter Tuning

We summarize the details of model training and evaluation in Table 3. "# of tasks" is the number of tasks in each batch during model training. For example, the 20 classes of the 20 Newsgroups dataset are split into 8 classes for meta-training, 5 for meta-validation, and 7 for meta-testing. When composing a meta-training batch, since "# of tasks" is 4, the following is repeated 4 times: 5 classes (our few-shot setup is 5-way) are randomly selected for a task from the 8 meta-training classes.

In Table 3, "# of queries" indicates the number of data points per class used to compute the meta-optimization loss in the outer loop of meta-training and the accuracy at meta-testing, respectively. During meta-training, we sample four tasks with 15 queries from $S^{tr}$, hence performing the low-rank adapter pooling four times per meta-optimization step.

We employed early stopping: model training was halted if the validation loss did not improve for 20 steps. For both validation and testing, we sample 15 tasks with 15 queries from $S^{val}$ and $S^{test}$. We used the Adam optimizer with learning rates of 0.1 and 0.001 for the inner and outer updates, and the inner update is repeated 40 times. During the meta-optimization process (outer loop), we apply weight decay scheduling. In addition, the coefficient $\lambda$ of the KL-divergence term in Eq. 4 was set to 0.001 and the coefficient $\gamma$ of the penalty term in Eq. 4 was set to 0.1. We performed all experiments on a single NVIDIA A100 80GB GPU.

Table 3: Hyperparameters for Model Training
| Hyperparameter | Value |
|---|---|
| meta-training set: # of tasks | 4 |
| meta-training set: # of queries | 15 |
| meta-validation set: # of tasks | 15 |
| meta-validation set: # of queries | 15 |
| meta-test set: # of tasks | 15 |
| meta-test set: # of queries | 15 |
| learning rate in inner loop | 0.1 |
| learning rate in outer loop | 0.001 |
| schedule steps | 5 |
| λ, weight of KL-divergence (Eq. 4) | 0.001 |
| γ, weight of latent variable penalty (Eq. 4) | 0.1 |
| number of adaptation steps | 40 |
+ +Table 4: Few-shot Classification Performance of Adapters + +
| Method | Amazon | Huffpost | RCV1 | Reuters | 20 Newsgroup |
|---|---|---|---|---|---|
| Freeze | 25.33 % | 30.67 % | 17.33 % | 26.67 % | 29.33 % |
| Full fine-tuning | 25.33 % | 29.33 % | 18.67 % | 28.00 % | 32.00 % |
| Adapter (Houlsby et al., 2019b) | 30.67 % | 28.00 % | 18.67 % | 25.33 % | 26.67 % |
| BitFit (Ben Zaken et al., 2022) | 26.67 % | 28.00 % | 22.67 % | 28.00 % | 26.67 % |
| AdapterBias (Fu et al., 2022) | 29.33 % | 28.00 % | 12.00 % | 29.33 % | 25.33 % |
| LoRA (Hu et al., 2021) | 28.00 % | 28.00 % | 17.33 % | 26.67 % | 26.67 % |
# A.4 Few-Shot Performance of Adapter-based Fine-tuning Methods

In addition, to verify the assumption introduced in Section 1, we measured the 5-way 5-shot performance of parameter-efficient fine-tuning methods (Houlsby et al., 2019b; Ben Zaken et al., 2022; Fu et al., 2022; Hu et al., 2021). As shown in Table 4, these methods do not achieve meaningful results in the few-shot setting.

# A.5 The Number of Fine-tuning Parameters of Adapters

Table 5 shows the number of parameters of the adapter-based fine-tuning methods: the original Adapter (Houlsby et al., 2019b), BitFit (Ben Zaken et al., 2022), Adapter-Bias (Fu et al., 2022), LoRA (Hu et al., 2021), and AMAL (ours) on the

Table 5: The number of the fine-tuning parameters of Adapters
| Method | # of fine-tuned params |
|---|---|
| Adapter (Houlsby et al., 2019b) | 1.23M |
| BitFit (Ben Zaken et al., 2022) | 0.10M |
| Adapter-Bias (Fu et al., 2022) | 12K |
| LoRA (Hu et al., 2021) | 0.294M |
| AMAL (Ours) | 0.595M |
+ +Table 6: The performance for 5-way 5-shot depending on the bottleneck size and the rank size. + +
| bottleneck size (m) | rank (r) | accuracy |
|---|---|---|
| 32 | 32 | 89.42% |
| 32 | 16 | 87.64% |
| 32 | 8 | 88.89% |
| 64 | 64 | 88.36% |
| 64 | 32 | 88.27% |
| 64 | 16 | 89.69% |
| 64 | 8 | 88.44% |
| 128 | 128 | 88.44% |
| 128 | 64 | 88.09% |
| 128 | 32 | 90.31% |
| 128 | 16 | 88.89% |
| 128 | 8 | 89.60% |
+ +$\mathrm{BERT}_{\mathrm{base}}$ (12 layers). Here, AMAL's latent embedding space is set to 64 dimensions. As revealed, AMAL requires the smallest amount of fine-tuning parameters. + +# A.6 The effect of the bottleneck size and the rank size + +We observed the effect of the two hyper-parameters, each of which is the adapter size and its rank, respectively. The base language model is the BERTbase. In the experiment, we changed the bottleneck size as 32, 64, 128 and their related, diverse rank sizes on the Amazon product reviews data. The table 6 shows the performance for the 5-way 5-shot. It is observed that the adaptation of language models can be settled on a low intrinsic dimension as mentioned in (Hu et al., 2021) and (Aghajanyan et al., 2020). \ No newline at end of file diff --git a/amalmetaknowledgedrivenfewshotadapterlearning/images.zip b/amalmetaknowledgedrivenfewshotadapterlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..99264b5ab91b415a557e4066c02a40e794c2004b --- /dev/null +++ b/amalmetaknowledgedrivenfewshotadapterlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:11f81ec6b962dbf9f32fffca068e59c030ef31a75857b18d584cbb5e58aa96dd +size 491816 diff --git a/amalmetaknowledgedrivenfewshotadapterlearning/layout.json b/amalmetaknowledgedrivenfewshotadapterlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..aa2cf88c363f521da24d07f5a4209530c4c495fe --- /dev/null +++ b/amalmetaknowledgedrivenfewshotadapterlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53bb41bf3b1cb1a0335f26b0a1e3218816676566ef8070f7c7e159bff41fac84 +size 357901 diff --git a/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/a2a95f90-6cf8-4b8b-90bb-7afe2b942a6b_content_list.json 
b/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/a2a95f90-6cf8-4b8b-90bb-7afe2b942a6b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..653ecff75011b99cb050b1d9a1b4729e27c5502c --- /dev/null +++ b/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/a2a95f90-6cf8-4b8b-90bb-7afe2b942a6b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39b3357e3145bd2849340ab25749d2df640c27e63a9c49188997287552d7d443 +size 111002 diff --git a/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/a2a95f90-6cf8-4b8b-90bb-7afe2b942a6b_model.json b/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/a2a95f90-6cf8-4b8b-90bb-7afe2b942a6b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..ee3d642bc66659b682cb88ab81f78ae895ce5a8d --- /dev/null +++ b/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/a2a95f90-6cf8-4b8b-90bb-7afe2b942a6b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0d02f2a38a4e76495f49ecafd99a9e732b0c46454206784d11acd42b6c149a7 +size 128646 diff --git a/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/a2a95f90-6cf8-4b8b-90bb-7afe2b942a6b_origin.pdf b/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/a2a95f90-6cf8-4b8b-90bb-7afe2b942a6b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..00277b26a08b23d4633a16e1f8c68867aa008656 --- /dev/null +++ b/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/a2a95f90-6cf8-4b8b-90bb-7afe2b942a6b_origin.pdf @@ -0,0 
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67ffa9052f72a2be0e928b413e99e2ccb404a371ef8662ccfe8ea65cce37cc2e +size 342125 diff --git a/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/full.md b/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/full.md new file mode 100644 index 0000000000000000000000000000000000000000..511fba728816308d06fb60be7c374d8b60f97b3c --- /dev/null +++ b/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/full.md @@ -0,0 +1,444 @@ +# A Multifaceted Framework to Evaluate Evasion, Content Preservation, and Misattribution in Authorship Obfuscation Techniques + +Malik H. Altakrori + +School of Computer Science + +McGill University / Mila + +Montreal, Canada + +malik.altakrori@mail.mcgill.ca + +Benjamin C. M. Fung + +School of Information Studies + +McGill University / Mila + +Montreal, Canada + +ben.fung@mcgill.ca + +Thomas Scialom + +Meta AI + +Paris, France + +tscialom@meta.com + +Jackie Chi Kit Cheung + +School of Computer Science + +McGill University / Mila + +Montreal, Canada + +jcheung@cs.mcgill.ca + +# Abstract + +Authorship obfuscation techniques have commonly been evaluated based on their ability to hide the author's identity (evasion) while preserving the content of the original text. However, to avoid overstating the systems' effectiveness, evasion detection must be evaluated using competitive identification techniques in settings that mimic real-life scenarios, and the outcomes of the content-preservation evaluation have to be interpretable by potential users of these obfuscation tools. Motivated by recent work on cross-topic authorship identification and content preservation in summarization, we re-evaluate different authorship obfuscation techniques on detection evasion and content preservation. 
Furthermore, we propose a new information-theoretic measure to characterize the misattribution harm that can be caused by detection evasion. Our results reveal key weaknesses in state-of-the-art obfuscation techniques and a surprisingly competitive effectiveness from a back-translation baseline in all evaluation aspects.

# 1 Introduction

Authorship obfuscation is the task of masking the writing style of an author of a document to prevent authorship identification techniques from using stylistic patterns to reveal the author's identity (Kacmarcik and Gamon, 2006). The motivation for this task is to protect the public from the misuse of authorship identification techniques to suppress freedom of speech or to persecute whistle-blowers.

When a new authorship obfuscation technique is proposed, it is crucial to compare its effectiveness to state-of-the-art obfuscation tools in settings that accurately depict the real-life environment where such a tool may be used. One important assumption that should be made is that a de-anonymizer is likely to use the most competitive authorship identification tool available to identify the author of a document. Using an inferior or brittle authorship attribution technique, or a weak obfuscation baseline, will overstate the performance of such obfuscation tools. For example, previous work on obfuscation has evaluated obfuscation techniques against an identification tool that uses the exact same features and classifier that were used to obfuscate it in the first place (Mahmood et al., 2019). This could give a misleadingly high impression of the system's effectiveness.

Similarly, obfuscation techniques must convey the same intended message both before and after obfuscation. Therefore, when a new obfuscation tool is proposed, it is evaluated on both the quality of obfuscation and its ability to preserve the content.
With the recent development of language models and their ability to generate text, many automatic measures have been introduced to evaluate the quality of this generated text (Novikova et al., 2017), and some of these measures have been used to evaluate obfuscation techniques.

The problem with existing content-preservation measures, however, is that they only provide an abstract, numerical score, which limits the user's ability to pinpoint the part of the text that suffered a loss of information and requires re-modification. Recently, question answering-based approaches were proposed and shown to provide meaningful feedback in the form of questions that tell the user which information in the text has been changed, and in which part (Durmus et al., 2020).

The concept of evasion in authorship obfuscation has a critical, potentially harmful side-effect that we raise for the first time. In a classification setting, the typical setting in which the evasion of obfuscation techniques is evaluated, a classifier has to pick one author from the set of candidate authors. An obfuscation technique may obfuscate a document by imitating another potential author, in effect unfairly "framing" another person. To investigate this behavior, we use an information-theoretic approach to evaluate the potential for misattribution of the obfuscating technique. Specifically, we propose a new evaluation measure, misattribution harm, whose goal is to characterize the confidence of the attribution algorithm rather than only its output.

In this work, we highlight a number of issues with the existing work on obfuscation with respect to two dimensions: obfuscation effectiveness and content preservation. We further propose a new evaluation dimension, namely misattribution. Our key contributions are the following:

- We show that a carefully selected baseline can outperform state-of-the-art obfuscation techniques.
- We use question answering as an evaluation measure for content preservation instead of token- and embedding-based approaches.
- Using information theory, we conduct a detailed analysis of the harm caused by misattributing a document to a different author in order to achieve detection evasion.

# 2 Background

Authorship obfuscation (Brennan et al., 2012) techniques aim to hide an author's writing style, which can be used by authorship identification tools to reveal the true identity of that author. Here, the assumption is that the author has already taken the precaution of hiding their identity by removing any identifying information, such as their name or address, from the text. By using obfuscation techniques, users aim to hide their writing habits, which may or may not be known to them. With that in mind, it is important that the obfuscated text conveys the same message after obfuscation.

# 2.1 Obfuscation

Obfuscation tools can be divided into two groups: generic off-the-shelf tools, and application-specific obfuscation tools.

Generic Tools. Examples of off-the-shelf tools include machine translation and data augmentation approaches (Mansoorizadeh et al., 2016). These tools have been adapted for the purpose of generating a slightly modified version of a document. Commonly, these tools are used as baselines to be compared against obfuscation-specific techniques (Brennan et al., 2012; Keswani et al., 2016) because they are easy to use, require no further training or extra data from the user, and need minimal knowledge about the obfuscation process. Table 1 is an example of these tools, where translating a sentence into different languages and then back to the original one creates a modified version of the original sentence.
| Text | Language |
|---|---|
| How is it going bro | – → En |
| Wie geht es dir, Bruder? | En → De |
| Comment vas-tu mon frère? | De → Fr |
| How are you my brother? | Fr → En |

Table 1: Back translation is a technique used to paraphrase a sentence by translating it through different languages and then back to the original language.

When machine translation approaches were initially used, only statistical machine translation (SMT) methods such as Moses (Koehn et al., 2007) and Google's previous Translate API (Wu et al., 2016) were available. They were shown to suffer from low obfuscation effectiveness and to generate text with poor linguistic fluency compared to obfuscation techniques. In contrast, more recent neural machine translation approaches are able to generate higher-quality translations than SMT approaches according to some evaluation metrics. This development warrants re-evaluating their performance, especially as both Brennan et al. (2012) and Keswani et al. (2016) used SMT approaches.

In this work, we use well-tuned baselines that are expected to be competitive with obfuscation techniques, as opposed to using simple and primitive ones. An example of an excluded baseline is Random Replacement, which tries to obfuscate a document by replacing words in that document with a random word from the author's vocabulary set, or with a synonym from a dictionary. Such baselines have been explored heavily in the literature and are known for their poor obfuscation performance and incoherent output.

Obfuscation-specific Tools. By contrast, obfuscation-specific tools are built specifically to hide the author's identity and are tested against state-of-the-art authorship attribution techniques. While these tools require further training and/or additional data, they have been shown to be more effective than generic tools.

In this work, we evaluate two different approaches that focus specifically on obfuscation. Mutant-X (Mahmood et al., 2019) is a genetic algorithm that utilizes GloVe (Pennington et al., 2014) word embeddings to replace words in a document with similar ones to create a modified version of the document.
This technique requires knowledge of the authorship attribution classifier, specifically the probability of each author, to do the obfuscation.

Heuristic Obfuscation Search (Bevendorff et al., 2019, 2020) was initially developed as an imitation approach to obfuscation. The algorithm requires a target author profile, which is the author's tri-gram frequencies, and the goal is to generate a document with a similar author profile. This is a rule-based approach where changes to the text, each governed by a different rule, are associated with costs, and the goal is to generate a document with high similarity to the target profile at the minimum cost, i.e., by making the smallest number of changes.

There exists another category of approaches where the obfuscation is done on the feature representation of the document, e.g., the $n$-gram vector representation, and not on the actual document. This category of obfuscation is used to protect the identity of the author while performing another task, such as sentiment analysis. Since the original text remains intact, we consider the literature on this category out of the scope of this work. An example of this line of work is Weggenmann and Kerschbaum (2018).

Finally, neural obfuscation-specific approaches, e.g., (Emmery et al., 2018; Bo et al., 2021), are still deemed impractical for the authorship obfuscation domain; researchers attribute this impracticality to the lack of the large training datasets that these neural approaches require (Bevendorff et al., 2020).

# 2.2 Identification

As mentioned earlier, it is important to use a state-of-the-art authorship identification approach to evaluate evasion in authorship obfuscation.
In the authorship attribution domain, it is well established that a cross-topic authorship identification tool should have a realistic performance that mimics real-life applications (Goldstein-Stewart et al., 2009; Sundararajan and Woodard, 2018; Stamatatos, 2017, 2018; Custódio and Paraboni, 2019; Barlas and Stamatatos, 2020, 2021; Altakrori et al., 2021). Because of that, we use a state-of-the-art cross-topic authorship identification technique (Altakrori et al., 2021), namely Masking (Stamatatos, 2018), to evaluate the evasion of obfuscation techniques.

# 2.3 Evaluating Content Preservation

Evaluating content preservation in text is important even if we value safety (Potthast et al., 2016). This is because people want to maintain their privacy while sharing their opinions freely. Besides obfuscation, content evaluation techniques are applicable to other NLG tasks such as machine translation and summarization, and these techniques fall into one of three groups.

Token-based evaluation metrics depend on the token overlap between a source and a target document. Examples of these metrics are METEOR (Banerjee and Lavie, 2005), BLEU (Papineni et al., 2002), and ROUGE-L (Lin, 2004). While these metrics were among the early ones to be used, they have been shown to have a lower correlation with human scores for fact preservation in text summarization (Maynez et al., 2020; Honovich et al., 2021).

With recent advances in representation learning, particularly in word and sentence embeddings, new model-based metrics were adopted, where a smaller change in the sentence embedding indicates higher content preservation. Examples of such metrics are the Universal Sentence Encoder (USE) (Cer et al., 2018) and BERTScore (Zhang et al., 2020).

More recently, the summarization community proposed a new, question-answering-based approach to evaluate content preservation in summarization.
The argument for this approach is that the content is considered preserved if we can give the same answer to a particular question both before and after summarization. Examples of this work are (Wang et al., 2020) and (Scialom et al., 2021). Using such a system, providing feedback to the users of obfuscation techniques becomes easier, since the unanswered questions and the spans from which the questions are taken can be shown.

# 3 A Multifaceted Evaluation Framework

Ideally, authorship obfuscation should only modify the author's writing style in a document while retaining all the original information. However, due to the entanglement of topic and writing style, modifying the document is likely to cause information loss; i.e., some content is not preserved. Based on that, obfuscation techniques are evaluated in two dimensions: evasion, and content preservation.

In the following subsections, we formally describe obfuscation, evasion, and content preservation, and we discuss the state of the tools used to evaluate them. Finally, we propose a novel evaluation dimension to characterize a potential side effect of a successful detection evasion, namely misattribution.

# 3.1 Obfuscation

Let $d$ be a document written by author $a^*$. To hide their identity, $a^*$ uses an obfuscation technique $O: d \to \hat{d}$ that takes a document $d$ as input, modifies it, and outputs an obfuscated version of this document, namely $\hat{d}$, such that $d \neq \hat{d}$.

For example, suppose we have a document $d$, where $d =$ "The decision caused the team a big loss!", which was written by author $a^* = Q$. Next, "Q" uses an obfuscation technique $O$ that modifies the document $d$ by changing it to $\hat{d}$, where $O(d) = \hat{d} =$ "The advice caused the team a huge loss".

# 3.2 Evading Detection

We use an authorship identification technique to evaluate the performance of authorship obfuscation.
If the identification technique was able to identify the original author before obfuscation but fails to identify that author after obfuscation, then the obfuscated document has evaded detection.

The evaluation process is as follows. We start by training and tuning an authorship identification tool on the training and validation documents, respectively. Then, we record the identification accuracy on the original test documents. Next, we use an obfuscation technique to modify the test documents to hide the authors' writing styles in these documents. Finally, without further training/finetuning of the identification tool, we measure the authorship identification performance on the obfuscated test documents. The effectiveness of an obfuscation technique is quantified by the difference in identification performance before and after obfuscation over all the test documents in the investigated dataset.

Formally, let $I$ be an authorship identification technique $I:(d,T)\to a_i$ that takes a document $d$ and a set of candidate authors of this document $T$ as input, and outputs $a_{i}$ as the most plausible author of this document $d$. Let $T = [a_{1},a_{2},\ldots ,a_{n}]$ and $n = |T|$. We say that author $a^*$ has evaded detection using the obfuscation tool $O$ if $I(d,T) = a^*$ and $I(O(d),T) = a_{j}$, where $a^{*}\neq a_{j}$ and $a^{*},a_{j}\in T$. Note that if $I(d,T)\neq a^*$, then $d$ does not require obfuscation against $I$.

To evaluate the obfuscation performance over a whole test dataset $D$, let $S:(a_i,a^*)\to \{0,1\}$ be the indicator function given by Eq. 1. Finally, let $\text{Accuracy} = \frac{1}{m}\sum_{i=1}^{m} S(I(d_{i},T),a_{i}^{*})$, where $m = |D|$ is the number of test documents.

$$
S\left(a_{i}, a^{*}\right) = \begin{cases} 1, & \text{if } a_{i} = a^{*} \\ 0, & \text{otherwise} \end{cases} \tag{1}
$$

Continuing from the example in Sec.
3.1, let $T$ be ["G", "Q", "B", "M", "W"], the predicted author before obfuscation, i.e., $I(d, T)$, be $Q$, and the predicted author after obfuscation, i.e., $I(O(d), T)$, be $G$. Here, the obfuscation tool $O$ has evaded detection successfully.

# 3.3 Preserving the Content

After evaluating evasion, content preservation is evaluated to investigate whether a loss of information has occurred due to obfuscation. An authorship obfuscation technique should maximize content preservation, or equivalently minimize the loss of information. The result of this evaluation is communicated to the author, who decides whether to accept the obfuscation outcome, or reject it if the information loss is drastic.

Formally, let $P:(d,O(d))\to \mathbb{R}$ be a content-preservation evaluation tool that takes an original document $d$ and an obfuscated document $O(d)$ as input, compares their content, and outputs a content-preservation score that represents the amount of information preserved from the original document $d$ after obfuscation.

For example, suppose that the content-preservation tool of choice is based on the word-level unigram overlap between the original document $d$ and the obfuscated document $O(d)$, where $d =$ "The decision caused the team a big loss!" and $O(d) =$ "The advice caused the team a huge loss". Splitting $d$ and $O(d)$ into word-level unigrams yields ["The", "decision", "caused", "the", "team", "a", "big", "loss!"] and ["The", "advice", "caused", "the", "team", "a", "huge", "loss"], respectively. Here, the goal is to maximize the content-preservation score, where $P(d,O(d)) = 5$.

# 3.4 Fairness, and the Potential of Misattribution Harm

In real-life applications of authorship identification, misattribution can have severe outcomes. For example, if the obfuscated text is a threatening message, then it is important to identify the real culprit to avoid persecuting an innocent person.
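Before turning to misattribution, the definitions of Secs. 3.1–3.3 can be sketched in a few lines of Python. This is only an illustration: the lambda-based identifier in the usage example is a hypothetical stand-in for any real identification technique $I$.

```python
from collections import Counter

def S(a_i, a_star):
    """Indicator function of Eq. 1: 1 iff the predicted author is correct."""
    return 1 if a_i == a_star else 0

def accuracy(identify, docs, T, true_authors):
    """Identification accuracy over a test set D (Sec. 3.2)."""
    return sum(S(identify(d, T), a) for d, a in zip(docs, true_authors)) / len(docs)

def unigram_overlap(d, d_hat):
    """Content preservation as the multiset overlap of word-level unigrams (Sec. 3.3)."""
    c, c_hat = Counter(d.split()), Counter(d_hat.split())
    return sum((c & c_hat).values())

# Running example from Secs. 3.1 and 3.3.
d = "The decision caused the team a big loss!"
d_hat = "The advice caused the team a huge loss"
print(unigram_overlap(d, d_hat))  # 5: "The", "caused", "the", "team", "a"
```

With a real identifier plugged in for `identify`, the evasion condition of Sec. 3.2 is simply `identify(d, T) == a_star and identify(O(d), T) != a_star`.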
Confidence in the identification outcome is a core concept in authorship identification that has not been emphasized in the obfuscation literature.

Instead of imitating the writing style of one of the candidate authors, an obfuscation technique may output a generic writing style that is difficult to attribute to a specific author. In that case, the identification technique will still provide a candidate author, but its confidence in this output will be low. As a result, if an obfuscation technique can lower the confidence of an identification method, then its outcome will be in a position of doubt, and hence neither the original author nor the identified one will have to suffer.

Formally, let $C: (I, d, T) \to \mathbb{R}^n$ be a tool that takes as input an authorship identification tool $I$, a document $d$, and a set of candidate authors for that document $T$, and outputs a probability distribution over the candidate authors $[c_1, c_2, \ldots, c_n]$, where $c_i = P_I(a_i|d)$ is the likelihood of author $a_i$ being the original author of document $d$ when authorship identification tool $I$ is used, $0 \leq c_i \leq 1$, and $\sum_{i}^{n} c_i = 1$.

In this work, we consider the model's confidence to be high when the probability for one author is much higher than for the other authors. For example, a model is most confident when it predicts an author with probability 1. In contrast, a model is least confident, or rather clueless, when the probability distribution over the authors is uniform, i.e., when the probability of each author is $\frac{1}{n}$, where $n$ is the number of authors.

Note that the model can predict the wrong author and have high confidence in its prediction. Luckily, this confidence can be measured by computing the entropy (Eq.
2) of the attribution model's output distribution, and the effect of misattribution harm can be characterized by the difference in entropy before and after obfuscating a document $d$. While there exist a number of approaches to measure the difference between two entropy values, such as cross-entropy or KL-divergence, we chose the difference in entropy for simplicity. Other measures could potentially be explored in future work.

$$
H(X) = -\sum_{t=1}^{n} P\left(x_{t}\right) \log_{2} P\left(x_{t}\right) \tag{2}
$$

Furthermore, this approach can provide a more fine-grained measure of performance than the identification accuracy. For example, let us assume that the attribution model had to identify the most plausible author from a set of three authors: $a_1$, $a_2$, and $a_3$. Before obfuscation, the model correctly identifies $a_2$ as the most plausible author with maximum confidence in its prediction, i.e., the probability distribution over the authors was 1 for $a_2$ and 0 otherwise.

After obfuscation using technique A, the model identifies $a_1$ as the most plausible author, i.e., $a_2$ has successfully evaded detection. The model outputs a probability distribution of 0.7 for $a_1$, 0.2 for $a_2$, and 0.1 for $a_3$, i.e., it still has high confidence in its prediction. Alternatively, after obfuscation using technique B, the identification model also identifies $a_1$ as the most plausible author, but outputs a probability distribution of 0.4 for $a_1$, 0.3 for $a_2$, and 0.3 for $a_3$.

Clearly, both techniques generated the same author prediction, and so both techniques evaded detection. However, technique B would be considered better because it caused the attribution model to have lower confidence in its prediction.

# 4 Experimental Setup

Our overall evaluation procedure is as follows. We started by establishing the authorship identification accuracy on the original datasets.
Note that the training and testing split is predefined for each dataset, as shown in Table 2. For validation, however, we shuffled the training set and took $20\%$ of the samples for validation.

We followed that by creating different obfuscated copies of the test sets, one for each obfuscation technique. Next, we evaluated detection evasion and misattribution on each obfuscated copy in one step. We concluded our evaluation with content preservation. We provide the details of each step separately below.

# 4.1 Corpora

For this work, we use two different corpora: the Extended Brennan-Greenstadt Corpus (EBG) dataset (Brennan et al., 2012) and the Reuters Corpus Volume 1 (RCV1) (Teahan, 2000; Khmelev, 2000; Kukushkina et al., 2001), commonly referred to as the C50 dataset. For each dataset, we use two author configurations: five authors and ten authors. We provide corpus statistics in Table 2.
| | C50 (5) | C50 (10) | EBG (5) | EBG (10) |
|---|---|---|---|---|
| **Training set** | | | | |
| Docs | 75 | 150 | 55 | 110 |
| Docs / author | 15 | 15 | 11 | 11 |
| Avg. doc len (W) | 478 | 452 | 496 | 494 |
| Avg. doc len (C) | 3007 | 2861 | 3157 | 3120 |
| **Testing set** | | | | |
| Docs | 75 | 150 | 35 | 59 |
| Docs / author | 15 | 15 | 7 | 6 |
| Avg. doc len (W) | 480 | 479 | 496 | 497 |
| Avg. doc len (C) | 3032 | 3036 | 3068 | 3046 |
| Total docs | 150 | 300 | 90 | 169 |
Table 2: Corpus statistics (Doc: document, W: words, C: characters). Numbers are reported as rounded means; standard deviations are reported in Table 8 in the appendix.

# 4.2 Authorship Obfuscation

The evasion performance of an obfuscation technique is compared to a set of baselines as well as state-of-the-art obfuscation techniques. Here, the role of a baseline is to set a lower bound on the performance while requiring little knowledge about the problem and fairly low effort to use.

In this work, we use a neural machine translation model in the back-translation baseline to replace the statistical models that were used in previous studies (Brennan et al., 2012; Keswani et al., 2016). Additionally, we use a contextual language model, namely BERT, to replace words based on their context, instead of replacing them with synonyms or random words from the author's vocabulary set.

Back Translation (BT) uses Facebook's many-to-many translation model (El-Kishky et al., 2020; Fan et al., 2021; Schwenk et al., 2021) as implemented in the HuggingFace (Wolf et al., 2020) library. This model has two advantages. Firstly, it is open-source and its results can be replicated, in contrast to commercial translation products, which are costly and can be replaced at any time.

Secondly, this model translates between languages directly, without using English as a reference/pivot language. Many existing neural machine translation models use English as a pivot language, where translation is done either from English or to English. For example, if the task is to translate from French to Chinese, one has to translate from French to English, and then from English to Chinese. This approach defeats the whole point of multi-hop translation, where the goal is to use the differences between languages in phrasing the same idea to change the writing style of a sentence.
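The hop structure of back translation is independent of the underlying MT system. Here is a minimal sketch (not the authors' implementation) in which `translate` is a hypothetical callable, e.g., a wrapper around the many-to-many model above, so only the chaining logic is shown:

```python
def back_translate(text, hops, translate):
    """Translate `text` through a chain of intermediate languages.

    `hops` lists target language codes and must end with the source
    language; e.g. ["de", "fr", "en"] reproduces the En -> De -> Fr -> En
    chain of Table 1. `translate(text, src, tgt)` is any pairwise MT
    function; a direct many-to-many model avoids pivoting through English.
    """
    src = hops[-1]  # the chain starts and ends in the source language
    for tgt in hops:
        text = translate(text, src, tgt)
        src = tgt
    return text

# Toy "translator" that just records each hop, to show the chaining order.
def toy_translate(text, src, tgt):
    return f"{text}|{src}->{tgt}"

print(back_translate("hi", ["de", "fr", "en"], toy_translate))
# hi|en->de|de->fr|fr->en
```

Because the translator is injected, the same helper works with SMT, NMT, or any commercial API, which is what makes the baseline easy to keep up to date.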
# Lexical Substitution Using BERT (LSB)

(Mansoorizadeh et al., 2016) masks random words in a sentence, then uses the BERT language model to replace these words with ones that fit the context.

Mutant-X (Mahmood et al., 2019) replaces words based on their GloVe word embeddings, given that the candidate replacement has the same sentiment. This technique requires knowledge of the authorship attribution classifier, specifically the probability of each author, to do the obfuscation.

Heuristic Obfuscation Search (A*) (Bevendorff et al., 2019) was originally developed as an imitation approach to obfuscation. The algorithm requires a target author profile, which is the author's tri-gram frequencies. This rule-based approach changes the text while incurring costs, and the goal is to generate a document with a high similarity to the target profile at minimum cost.

# 4.3 Authorship Identification

For authorship identification, we use a state-of-the-art (Altakrori et al., 2021) cross-topic authorship identification technique to evaluate the evasion of obfuscation techniques, namely Masking (Stamatatos, 2018). The main idea of this approach is to mask words in a document, where masking is done by replacing the characters in the word with asterisks, and then to use word- or character-level $n$-grams over the masked text as features. The choice of which words are masked is based on the hyperparameter $k$: in a document, any word that is not among the $k$ most frequent words in the British National Corpus (BNC) must be masked. After masking and extracting the $n$-gram features, a Support Vector Machine (SVM) with a linear kernel is used as a classifier.
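A minimal sketch of the masking step, assuming the variant that replaces every character of an out-of-vocabulary word with an asterisk; the tiny `frequent` set below is a toy stand-in for the $k$ most frequent BNC words:

```python
import re

def mask_document(text, frequent):
    """Mask every word not in the frequent-word set with asterisks,
    one per character, so only topic-neutral words stay visible."""
    def mask_word(match):
        word = match.group(0)
        return word if word.lower() in frequent else "*" * len(word)
    return re.sub(r"[A-Za-z]+", mask_word, text)

def char_ngrams(text, n=3):
    """Character n-gram features extracted from the masked text."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

frequent = {"the", "a", "of", "and", "caused"}  # toy stand-in for the BNC top-k
masked = mask_document("The decision caused a loss", frequent)
print(masked)  # The ******** caused a ****
```

The masked documents are then vectorized, e.g., with `char_ngrams` or word $n$-grams, and fed to the linear SVM.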
| Anon. Tech. | EBG (5) Acc. % | Diff. | EBG (10) Acc. % | Diff. | C50 (5) Acc. % | Diff. | C50 (10) Acc. % | Diff. |
|---|---|---|---|---|---|---|---|---|
| None (Original text) | 96.4 | - | 77.6 | - | 76.0 | - | 67.3 | - |
| A* | 93.5 | 2.9 | 71.1 | 6.5 | **72.0** | **4.0** | **64.0** | **3.3** |
| Back Translation | **84.0** | **12.4** | **64.2** | **13.4** | 73.3 | 2.7 | 65.3 | 2.0 |
| Lexical Sub (BERT) | 91.5 | 4.9 | 78.4 | -0.8 | 76.0 | 0 | 67.3 | 0 |
| Mutant-X | 86.4 | 10.0 | 73.6 | 4.0 | 74.7 | 1.3 | 66.7 | 0.6 |
Table 3: Obfuscation performance, characterized by the change in identification accuracy (Acc. %), using word masking with character $n$-grams as features and a linear SVM classifier. Diff. is the difference between the identification accuracy on the original text and the accuracy after obfuscation; lower identification accuracy (higher difference) is better. A negative sign means the accuracy increased instead of decreasing. Bold: best result per column.

# 4.4 Content Preservation

To evaluate content preservation, we chose the EBG dataset with the ten-author configuration. From all the original test documents and the four obfuscated versions, we randomly selected $10\%$ of the documents. These documents were split into sentences, and a sentence was included in or excluded from the evaluation samples based on a coin flip. This resulted in 212 sampled sentences, an average of 42 sentences per obfuscation technique. To avoid cherry-picking samples that favor one metric over another, we did not exclude any of the sampled sentences. However, we discuss the consequences of this in the results section below.

To evaluate the content preservation of these samples, we used the HuggingFace implementations of both token-based and model-based evaluation tools. For the question answering approach, we used QuestEval (Scialom et al., 2021), which generates the questions from the original document instead of needing a reference.

In brief, we used BLEU (Papineni et al., 2002), ROUGE-1, 2, and L (Lin, 2004), METEOR (Banerjee and Lavie, 2005), BERTScore (Zhang et al., 2020), and QuestEval (Scialom et al., 2021) to evaluate content preservation. For example, for BLEU, we consider the obfuscated text as the translation of the original text and report the average BLEU score over the 212 sampled sentences.

Appendix A.6 provides various obfuscated examples for each obfuscation technique. With these examples, we inspect what each technique was good at, and what it failed to do.
A more detailed study on the types of errors made by obfuscation techniques can be found in (Gröndahl and Asokan, 2020).

# 4.5 Characterizing Misattribution

As described in Sec. 3.4, we calculate the change in entropy before and after obfuscation. We follow the same training procedure that was used for identification. However, instead of using the authors' probabilities to find the most likely author, we calculate the entropy of that output distribution.

Finally, we normalize the entropy scores to make them comparable with the other scores, which are bounded between zero and one. To do that, we divide the entropy scores by the entropy of the uniform distribution over $K$ authors, where $K$ is the number of authors in each dataset.

# 5 Experimental Results

# 5.1 Evaluating Evasion

As mentioned earlier in Sec. 3.2, the successful evasion of an obfuscation technique is measured by the drop in authorship identification accuracy after obfuscation. In Table 3, the first row shows the identification accuracy on the original test documents, i.e., before obfuscation. The rows below it show the identification accuracy after obfuscating the test documents. Here, the lower the attribution accuracy after obfuscation, the better an obfuscation algorithm is at evading detection.

We make the following observations from Table 3. First, despite being a baseline, back translation outperforms both obfuscation techniques on the EBG dataset, and comes as a close second on the C50 dataset after the A* approach of Bevendorff et al. (2019). In contrast to the literature, back translation is no longer a weak baseline.

The other general observation that we make is that identifying the original author, even without obfuscation, becomes much harder as the number of candidate authors increases. Specifically, as the number of authors increased from five authors
| Anon. Tech. | ROUGE-1 | ROUGE-2 | ROUGE-L | BLEU | METEOR | BERTSc. | QuestE. |
|---|---|---|---|---|---|---|---|
| None (original text) | 1.000 | 0.981 | 1.000 | 0.981 | 1.000 | 1.000 | 0.678 |
| A* | **0.906** | **0.858** | **0.906** | **0.766** | 0.867 | 0.966 | 0.582 |
| Back Translation | 0.704 | 0.471 | 0.681 | 0.312 | 0.722 | 0.958 | **0.620** |
| Lexical Sub (BERT) | 0.848 | 0.696 | 0.845 | 0.593 | 0.844 | 0.965 | 0.599 |
| Mutant-X | 0.902 | 0.814 | 0.902 | 0.746 | **0.915** | **0.976** | 0.555 |
Table 4: Content preservation scores on 212 sampled sentences from the EBG-10 dataset. The first row is the score for the original text. Higher is better; bold is the maximum value per column (excluding the original text).
| Anon. Tech. | EBG (5) Ent. | Diff. | EBG (10) Ent. | Diff. | C50 (5) Ent. | Diff. | C50 (10) Ent. | Diff. |
|---|---|---|---|---|---|---|---|---|
| None (Original text) | 73.9 | - | 83.3 | - | 79.4 | - | 84.3 | - |
| A* | 78.8 | -4.9 | 86.8 | -3.5 | 79.0 | +0.4 | 84.4 | -0.1 |
| Back Translation | **82.3** | **-8.4** | **88.8** | **-5.5** | **83.1** | **-3.7** | **87.3** | **-3.0** |
| Lexical Sub (BERT) | 80.5 | -6.6 | 84.6 | -1.3 | 80.2 | -0.8 | 85.3 | -1.0 |
| Mutant-X | 72.6 | +1.3 | 85.2 | -1.9 | 82.0 | -2.6 | 83.2 | +1.1 |
Table 5: Characterizing misattribution using the normalized entropy score (%). Ent. is the normalized entropy score; Diff. is the difference between the entropy score for the original text and the entropy after obfuscation. Higher entropy (lower Diff.) is better. Bold is the best value per column.

to ten authors, the authorship attribution accuracy dropped by around 20% and 10% on the EBG and the C50 dataset, respectively. We conduct more analysis on the robustness of the evaluated obfuscation techniques in Sec. 6. Specifically, we use different identification techniques, with various writing style features, and report the results of this analysis in Table 6.

# 5.2 Content Preservation

Table 4 shows the results of content preservation using various evaluation metrics. Naturally, $\mathrm{A}^*$ has the best performance on the token-based metrics, given that most of its modifications are done at the character level, i.e., it has a lower tendency to change whole words. Similarly, Mutant-X has the highest model-based scores because words are replaced based on their embeddings.

Conversely, back translation has the worst scores on both token-based and model-based measures. In contrast, it has the closest score to the original text using the QA-based approach, which, as mentioned earlier, correlates better with human scores than token-based and model-based metrics.

We manually investigated the quality of sentences that were obfuscated using the back translation technique (see Appendix A.6). In these sentences, one can see that back translation rephrases the sentence and maintains the original content despite using different tokens. This is a clear indication that the QA-based approach is more trustworthy for measuring content preservation than the commonly used token-based approaches.

# 5.3 Characterizing Unfair Misattribution Using Entropy

Table 5 shows the normalized entropy scores that are used to characterize unfair misattribution.
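The normalized entropy itself takes only a few lines to compute. As a sketch (the three-author distributions below reuse the illustrative example from Sec. 3.4, not real model outputs):

```python
import math

def normalized_entropy(probs):
    """Entropy of an author distribution (Eq. 2), divided by log2(n)
    so a uniform distribution scores 1 and a one-hot distribution 0."""
    h = sum(-p * math.log2(p) for p in probs if p > 0)
    return h / math.log2(len(probs))

print(normalized_entropy([1.0, 0.0, 0.0]))      # 0.0 (maximal confidence)
print(round(normalized_entropy([1/3] * 3), 3))  # 1.0 (clueless model)
# Technique B leaves the identifier less confident than technique A.
assert normalized_entropy([0.4, 0.3, 0.3]) > normalized_entropy([0.7, 0.2, 0.1])
```

The entropy difference before and after obfuscation is then a plain subtraction of two such scores, matching the Diff. columns of Table 5.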
The higher the normalized entropy, the closer the probability distribution over the predicted authors is to the uniform distribution. In that case, the model has no preference for one particular author, or has low confidence in its outcome. Back translation has the best performance on this evaluation metric, measured by the increase in normalized entropy over the score before obfuscation (the first row). Our interpretation is that by translating through different languages, back translation generates text in a generic style that is hard to attribute to one particular author. In contrast, A* tries to imitate a specific author's writing style to avoid detection, while Mutant-X requires a set of candidate authors and a classifier to do the obfuscation, and only stops when the obfuscated text is attributed to a different author.

# 6 Analysis of Robustness

In this section, we conduct a battery of tests with different attribution features. The goal of this study
| Dataset | Anon. tech. | Stylo. (–) | Stylo. (POS) | N-grams (Ch.) | N-grams (W.) | Masking (Ch.) | Masking (W.) | Average |
|---|---|---|---|---|---|---|---|---|
| EBG 5 | No Anon. | 81.2 | 90.0 | 91.5 | 96.4 | 88.8 | 96.4 | 90.7 ± 5.2 |
| | A* | 51.3 | 78.0 | 91.5 | 94.6 | 76.4 | 93.5 | 80.9 ± 15.1 |
| | Back Translation | 59.7 | 67.3 | 89.7 | 96.4 | 83.1 | 84.0 | 80.0 ± 12.7 |
| | Lexical Sub (BERT) | 68.2 | 80.2 | 89.7 | 96.4 | 95.3 | 91.5 | 86.9 ± 9.9 |
| | Mutant-X | 81.2 | 84.7 | 91.5 | 96.4 | 95.5 | 86.4 | 89.3 ± 5.6 |
| EBG 10 | No Anon. | 58.8 | 53.9 | 73.3 | 75.1 | 53.1 | 77.6 | 65.3 ± 10.3 |
| | A* | 47.8 | 37.2 | 74.2 | 71.8 | 51.7 | 71.1 | 59.0 ± 14.1 |
| | Back Translation | 46.2 | 43.5 | 66.3 | 73.2 | 59.5 | 64.2 | 58.8 ± 10.7 |
| | Lexical Sub (BERT) | 55.0 | 52.0 | 71.5 | 73.7 | 58.7 | 78.4 | 64.9 ± 10.1 |
| | Mutant-X | 55.0 | 52.2 | 74.1 | 77.0 | 52.3 | 73.6 | 64.0 ± 11.0 |
| C50 5 | No Anon. | 65.3 | 68.0 | 84.0 | 84.0 | 61.3 | 76.0 | 73.1 ± 8.9 |
| | A* | 65.3 | 66.7 | 81.3 | 85.3 | 58.7 | 72.0 | 71.6 ± 9.2 |
| | Back Translation | 64.0 | 65.3 | 82.7 | 81.3 | 58.7 | 73.3 | 70.9 ± 9.0 |
| | Lexical Sub (BERT) | 65.3 | 68.0 | 85.3 | 88.0 | 62.7 | 76.0 | 74.2 ± 9.7 |
| | Mutant-X | 60.0 | 62.7 | 84.0 | 84.0 | 54.7 | 74.7 | 70.0 ± 11.6 |
| C50 10 | No Anon. | 58.7 | 69.3 | 69.3 | 64.0 | 53.3 | 67.3 | 63.6 ± 5.9 |
| | A* | 56.7 | 69.3 | 68.0 | 61.3 | 54.7 | 64.0 | 62.3 ± 5.4 |
| | Back Translation | 55.3 | 64.7 | 65.3 | 62.0 | 52.0 | 65.3 | 60.8 ± 5.2 |
| | Lexical Sub (BERT) | 56.7 | 70.7 | 68.7 | 62.0 | 54.7 | 67.3 | 63.4 ± 6.0 |
| | Mutant-X | 56.0 | 67.3 | 69.3 | 62.7 | 52.7 | 66.7 | 62.4 ± 6.1 |
+ +Table 6: Obfuscation performance using different sets of features with a Support Vector Machine classifier. The No Anon. rows report the identification accuracy on the original text. + +is to characterize the obfuscation performance under different types of writing style features, which vary between stylometric features and content features. The results are shown in Table 6. As shown, the performance of the obfuscation techniques varies drastically based on the choice of feature representation. Because of that, it is important to evaluate a proposed technique against authorship identification techniques with different feature representations. + +# 7 Conclusion + +In this work, we demonstrated the importance of using state-of-the-art evaluation tools to measure the performance of authorship obfuscation techniques. In addition, our experiments revealed that current obfuscation techniques have key weaknesses and are outperformed in multiple evaluation aspects by a simple baseline, namely back translation. Furthermore, we identified a critical issue with respect to the fairness of obfuscation techniques. Our proposed misattribution measure investigates the side effect of a successful detection evasion, namely identifying another author as the most plausible author of the obfuscated text. As a result, we argue that attacking the confidence of the identification model by generating text in a generic style would confuse the model and make it unusable in real-life applications. Finally, we argue that the evaluation of authorship obfuscation tools should follow the rapidly evolving domain of evaluation tools, while keeping potential users and real-life applications in mind when developing and evaluating novel obfuscation techniques. + +# Acknowledgments + +We would like to thank the reviewers for their valuable discussion during the rebuttal period.
+ +The first author is supported by the Doctoral Scholarship from Fonds de Recherche du Quebec Nature et Technologies (FRQNT-275545). In addition, this research is supported in part by the Discovery Grants (RGPIN-2018-03872) and CREATE Grants (CREATE-554764-2021) from the Natural Sciences and Engineering Research Council of Canada, and Canada Research Chairs Program (950-230623). The fourth author is supported by a Canada CIFAR AI Chair. + +# 8 Limitations + +One potential limitation of this work is that obfuscation can be misused in a similar way that authorship identification can be misused. However, it is important that the public be aware of the existence of such tools, and for researchers to have better obfuscation techniques to raise the bar for identification techniques. Another limitation is that we could have used more datasets in our analysis. We note that our results, particularly where a baseline outperforms state-of-the-art obfuscation techniques, would still be interesting regardless of the number of datasets. In addition, all the datasets for authorship obfuscation share similar characteristics in terms of size. + +Another potential limitation of this work is the lack of human evaluation for content preservation. While question answering approaches have been shown to correlate well with human evaluation scores for factual consistency, it would have been interesting to analyze the cases when such techniques fail. In particular, what type of errors do such techniques make? For example, do these techniques produce ungrammatical sentences, or generate grammatical but nonsensical sentences? + +# References + +Malik Altakrori, Jackie Chi Kit Cheung, and Benjamin C. M. Fung. 2021. The topic confusion task: A novel evaluation scenario for authorship attribution. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4242-4256, Punta Cana, Dominican Republic. Association for Computational Linguistics. +Satanjeev Banerjee and Alon Lavie.
2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics. +Georgios Barlas and Efstathios Stamatatos. 2020. Cross-domain authorship attribution using pretrained language models. In Artificial Intelligence Applications and Innovations, pages 255-266, Cham. Springer International Publishing. +Georgios Barlas and Efstathios Stamatatos. 2021. A transfer learning approach to cross-domain authorship attribution. *Evolving Systems*, pages 1-19. +Janek Bevendorff, Martin Potthast, Matthias Hagen, and Benno Stein. 2019. Heuristic authorship obfuscation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1098-1108, Florence, Italy. Association for Computational Linguistics. +Janek Bevendorff, Tobias Wenzel, Martin Potthast, Matthias Hagen, and Benno Stein. 2020. On divergence-based author obfuscation: An attack on the state of the art in statistical authorship verification. it-Information Technology, 62(2):99-115. +Haohan Bo, Steven H. H. Ding, Benjamin C. M. Fung, and Farkhund Iqbal. 2021. ER-AE: Differentially private text generation for authorship anonymization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3997-4007, Online. Association for Computational Linguistics. +Michael Brennan, Sadia Afroz, and Rachel Greenstadt. 2012. Adversarial stylometry: Circumventing authorship recognition to preserve privacy and anonymity. ACM Transactions on Information and System Security (TISSEC), 15(3):1-22. +Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018.
USE: Universal sentence encoder for english. In Proc. of the 2018 conference on empirical methods in natural language processing: system demonstrations (EMNLP), pages 169-174. +Jose Eleandro Custódio and Ivandre Paraboni. 2019. An ensemble approach to cross-domain authorship attribution. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 201-212. Springer. +Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5055-5070, Online. Association for Computational Linguistics. +Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, and Philipp Koehn. 2020. CCAligned: A massive collection of cross-lingual web-document pairs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5960-5969, Online. Association for Computational Linguistics. +Chris Emmery, Enrique Manjavacas Arevalo, and Grzegorz Chrupał. 2018. Style obfuscation by invariance. In Proceedings of the 27th International Conference on Computational Linguistics, pages 984-996, Santa Fe, New Mexico, USA. Association for Computational Linguistics. +Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep + +Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond english-centric multilingual machine translation. Journal of Machine Learning Research, 22(107):1-48. +Jade Goldstein-Stewart, Ransom Winder, and Roberta Sabin. 2009. Person identification from text and speech genre samples. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 336-344, Athens, Greece. Association for Computational Linguistics. +Tommi Gröndahl and N Asokan. 2020. Effective writing style transfer via combinatorial paraphrasing. Proc. Priv. 
Enhancing Technol., 2020(4):175-195. +Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, and Omri Abend. 2021. $q^2$ : Evaluating factual consistency in knowledge-grounded dialogues via question generation and question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7856-7870, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Gary Kacmarcik and Michael Gamon. 2006. Obfuscating document stylometry to preserve author anonymity. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 444-451, Sydney, Australia. Association for Computational Linguistics. +Yashwant Keswani, Harsh Trivedi, Parth Mehta, and Prasenjit Majumder. 2016. Author masking through translation. In CLEF (Working Notes), pages 890-894. +Dmitry V Khmelev. 2000. Disputed authorship resolution through using relative empirical entropy for markov chains of letters in human language texts. Journal of quantitative linguistics, 7(3):201-207. +Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Republic. Association for Computational Linguistics. +Olga V Kukushkina, Anatoly A Polikarpov, and Dmitry V Khmelev. 2001. Using literal and grammatical statistics for authorship attribution. *Problems of Information Transmission*, 37(2):172-184. +Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics. 
+ +Asad Mahmood, Faizan Ahmad, Zubair Shafiq, Padmini Srinivasan, and Fareed Zaffar. 2019. A girl has no name: Automated authorship obfuscation using mutant-x. Proceedings on Privacy Enhancing Technologies, 2019(4):54-71. +Muharram Mansoorizadeh, Taher Rahgooy, Mohammad Aminiyan, and Mahdy Eskandari. 2016. Author obfuscation using wordnet and language models—notebook for pan at clef 2016. In CLEF 2016 Evaluation Labs and Workshop-Working Notes Papers, pages 5–8. +Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906-1919, Online. Association for Computational Linguistics. +Jekaterina Novikova, Ondrej Dusek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241-2252, Copenhagen, Denmark. Association for Computational Linguistics. +James O'Shea. 2013. Alphabetical order 277 word new function word list. https://semanticsimilarity.files.wordpress.com/2013/08/jim-oshea-fwlist-277.pdf. [Retrieved Oct. 2019]. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics. +Martin Potthast, Matthias Hagen, and Benno Stein. 2016. Author obfuscation: Attacking the state of the art in authorship verification. 
In CLEF (Working Notes), pages 716-749. +Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin, and Angela Fan. 2021. CCMatrix: Mining billions of high-quality parallel sentences on the web. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6490-6500, Online. Association for Computational Linguistics. + +Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang, and Patrick Gallinari. 2021. QuestEval: Summarization asks for fact-based evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6594-6604, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. + +Efstathios Stamatatos. 2017. Authorship attribution using text distortion. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1138-1149, Valencia, Spain. Association for Computational Linguistics. + +Efstathios Stamatos. 2018. Masking topic-related information to enhance authorship attribution. Journal of the Association for Information Science and Technology, 69(3):461-473. + +Kalaivani Sundararajan and Damon Woodard. 2018. What represents "style" in authorship attribution? In Proceedings of the 27th International Conference on Computational Linguistics, pages 2814-2822, Santa Fe, New Mexico, USA. Association for Computational Linguistics. + +W. J. Teahan. 2000. Text classification and segmentation using minimum cross-entropy. In Content-Based Multimedia Information Access - Volume 2, RIAO '00, page 943-961. Le Centre de Hautes Etudes Internationales D'Informatique Documentaire, Paris, FRA. + +Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008-5020, Online. Association for Computational Linguistics. + +Benjamin Weggenmann and Florian Kerschbaum. 2018. Syntf: Synthetic and differentially private term frequency vectors for privacy-preserving text mining. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2018, Ann Arbor, MI, USA, July 08-12, 2018, pages 305-314. ACM. + +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics. + +Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, + +Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. ArXiv preprint, abs/1609.08144. + +Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. *Bertscore: Evaluating text generation with BERT.* In *8th International Conference on Learning Representations*, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. 
+ +# A Appendices + +# A.1 Hardware and Runtime + +The experiments for this paper were run on a workstation with one Quadro RTX 8000 GPU, four CPUs, and 32GB of RAM. Run time (estimated by wandb.com) is as follows. + +1. Obfuscation run-time: $\sim 10$ days, that is $\sim 256$ Hrs total. +2. Authorship identification run-time: $\sim 0.6$ day, that is $\sim 14.5$ Hrs total. + +# A.2 Hyperparameters + +Table 7 shows the ranges of hyperparameters that were used for Masking (the main identification technique) and the other writing style features that were used in the ablation study in Table 6. + +
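For concreteness, the character $n$-gram features governed by $n_{ch}$ and the minimum-frequency threshold $f_t$ in Table 7 can be sketched with the standard library alone (an illustrative toy, not the code used in the experiments):

```python
from collections import Counter

def char_ngrams(text, n):
    """All overlapping character n-grams of a document."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def ngram_features(corpus, n_ch=3, f_t=2):
    """Frequency vectors over character n-grams, keeping only n-grams
    that occur at least f_t times in the whole corpus -- mirroring the
    n_ch and f_t hyperparameters in Table 7 (illustrative sketch)."""
    totals = Counter(g for doc in corpus for g in char_ngrams(doc, n_ch))
    vocab = sorted(g for g, c in totals.items() if c >= f_t)
    vectors = [[Counter(char_ngrams(doc, n_ch))[g] for g in vocab]
               for doc in corpus]
    return vocab, vectors

vocab, vecs = ngram_features(["the cat sat", "the hat"], n_ch=3, f_t=2)
# Only "he " and "the" occur at least twice across this tiny corpus.
assert vocab == ["he ", "the"]
```

In the actual experiments these vectors would feed the Support Vector Machine identifier of Section 6; the grid in Table 7 is then a search over `n_ch` and `f_t` (plus the masking threshold `k`).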
| Hyperparameter | Range |
|---|---|
| $k$ | 100, 200, 300, 400, 500, 1000, 2000, 3000, 4000, 5000 |
| $f_t$ | 5, 10, 15, 20, 25, 30, 35, 40, 45, 50 |
| $n_{ch}$ | 3, 4, 5, 6, 7, 8 |
| $n_w$ | 1, 2, 3 |
| epochs | 2, 5 |
| vocab_size | 2000, 5000 |
+ +Table 7: Hyperparameters for masking and $n$ -gram based feature representations. $k$ is the threshold for masking, $n_w$ is the order of the word-level and POS $n$ -grams, $n_{ch}$ is the order of the character-level $n$ -grams, and $f_t$ is the minimum frequency threshold in the whole dataset. + +# A.3 Corpus Statistics, with Mean and SD + +See Table 8. + +# A.4 Misattribution Harm + +Table 9 shows the normalized entropy scores (with SD), while Table 10 shows the identification accuracy on the left and the unnormalized entropy scores that characterize the misattribution behavior on the right. The goal of this table is to show that raw entropy scores are less intuitive than the normalized values, which are bounded between zero and one. + +# A.5 Stylometric Features + +Table 11 shows details of the static, stylometric features that were used in the ablation study. + +
| | C50 (5 authors) | C50 (10 authors) | EBG (5 authors) | EBG (10 authors) |
|---|---|---|---|---|
| **Training set** | | | | |
| Docs | 75 | 150 | 55 | 110 |
| Docs / author | 15 (0.0) | 15 (0.0) | 11 (0.0) | 11 (0.0) |
| Avg. doc len (W) | 478 (46.4) | 452 (60.8) | 496 (6.1) | 496 (6.1) |
| Avg. doc len (C) | 3007 (273.1) | 2861 (366.9) | 3157 (24.0) | 3157 (24.0) |
| **Testing set** | | | | |
| Docs | 75 | 150 | 55 | 110 |
| Docs / author | 15 (0.0) | 15 (0.0) | 7 (4.0) | 7 (4.0) |
| Avg. doc len (W) | 480 (86.2) | 479 (77.6) | 496 (14.1) | 496 (14.1) |
| Avg. doc len (C) | 3032 (567.2) | 3036 (473.9) | 3068 (102.7) | 3068 (102.7) |
| **Total docs** | 150 | 300 | 90 | 169 |
+ +Table 8: Corpora statistics. (Mean and SD) + +
| Anon. tech. | EBG (5 authors) | EBG (10 authors) | C50 (5 authors) | C50 (10 authors) |
|---|---|---|---|---|
| None (original text) | 73.9 ± 4.4 | 83.3 ± 3.8 | 79.4 ± 6.8 | 84.3 ± 2.9 |
| A* | 78.8 ± 4.9 | 86.8 ± 4.0 | 79.0 ± 6.5 | 84.4 ± 2.1 |
| Back Translation | 82.3 ± 4.3 | 88.8 ± 2.1 | 83.1 ± 4.8 | 87.3 ± 3.1 |
| Lexical Sub (BERT) | 80.5 ± 4.5 | 84.6 ± 2.5 | 80.2 ± 6.6 | 85.3 ± 2.4 |
| Mutant-X | 72.6 ± 4.7 | 85.2 ± 2.8 | 82.0 ± 5.4 | 83.2 ± 2.6 |
+ +Table 9: Characterizing the misattribution using the normalized entropy score $(\%)$ + +
| Anon. tech. | Acc. EBG (5) | Acc. EBG (10) | Acc. C50 (5) | Acc. C50 (10) | Ent. EBG (5) | Ent. EBG (10) | Ent. C50 (5) | Ent. C50 (10) |
|---|---|---|---|---|---|---|---|---|
| None | 96.4 | 77.6 | 76.0 | 67.3 | 1.72 ± 0.1 | 2.77 ± 0.1 | 1.84 ± 0.2 | 2.80 ± 0.1 |
| A* | 93.5 | 71.1 | 72.0 | 64.0 | 1.83 ± 0.1 | 2.88 ± 0.1 | 1.83 ± 0.1 | 2.81 ± 0.1 |
| Back T. | 84.0 | 64.2 | 73.3 | 65.3 | 1.91 ± 0.1 | 2.95 ± 0.1 | 1.93 ± 0.1 | 2.90 ± 0.1 |
| LS BERT | 91.5 | 78.4 | 76.0 | 67.3 | 1.87 ± 0.1 | 2.81 ± 0.1 | 1.86 ± 0.2 | 2.84 ± 0.1 |
| Mutant-X | 86.4 | 73.6 | 74.7 | 66.7 | 1.69 ± 0.1 | 2.83 ± 0.1 | 1.90 ± 0.1 | 2.76 ± 0.1 |
+ +Table 10: Identification accuracy (left) and Misattribution harm (right) characterized by raw entropy score. + +
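Most of the character-level lexical features listed in Table 11 below are simple count ratios; a minimal sketch (a hypothetical helper for illustration, not the authors' implementation):

```python
import string
from collections import Counter

def char_level_features(text):
    """A few of the character-level lexical features from Table 11:
    character count, digit/letter/uppercase ratios, and per-letter
    frequencies (illustrative sketch)."""
    n = len(text)
    feats = {
        "char_count": n,
        "digit_ratio": sum(c.isdigit() for c in text) / n,
        "letter_ratio": sum(c.isalpha() for c in text) / n,
        "upper_ratio": sum(c.isupper() for c in text) / n,
    }
    # Frequency of each letter A-Z, ignoring case (26 features).
    counts = Counter(c for c in text.lower() if c in string.ascii_lowercase)
    for ch in string.ascii_lowercase:
        feats[f"freq_{ch}"] = counts[ch] / n
    return feats

feats = char_level_features("A girl has no name: 2019.")
assert feats["char_count"] == 25
assert feats["digit_ratio"] == 4 / 25
```

Word-level and syntactic features (function-word frequencies, punctuation counts) follow the same pattern over tokens rather than characters.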
**Lexical Features – Character-Level**

1. Characters count (N)
2. Ratio of digits to N
3. Ratio of letters to N
4. Ratio of uppercase letters to N
5. Ratio of tabs to N
6. Frequency of each alphabet letter (A–Z), ignoring case (26 features)
7. Frequency of special characters: `<%|{}[ ]\@# +-*=$^ & _()'` (24 features)

**Lexical Features – Word-Level**

1. Tokens count (T)
2. Average sentence length (in characters)
3. Average word length (in characters)
4. Ratio of alphabets to N
5. Ratio of short words to T (a short word has a length of 3 characters or less)
6. Ratio of words of each length to T, e.g., 20% of the words are 7 characters long (20 features)
7. Ratio of word types (the vocabulary set) to T

**Syntactic Features**

1. Frequency of punctuation: `, . ? ! : ; ' "` (8 features)
2. Frequency of each function word (O'Shea, 2013) (277 features)
+ +Table 11: List of stylometric features. + +# A.6 Qualitative Analysis + +In this section, we provide examples for each obfuscation system, and comment on what each one does. Note that the examples were cherry-picked in order to highlight the different issues in each approach. + +Tables 12 to 15 provide examples for A*, Mutant-X, back translation, and lexical substitution with BERT, respectively. A more detailed study on categorizing the types of errors made by an obfuscation technique can be found in (Gröndahl and Asokan, 2020). + +
| | Example |
|---|---|
| Original | The decline of the Kongo due to a series of wars with the Portuguese in the seventeenth century, |
| Modified | The decrease of the Kongo due to ab polynomial of wars with the Portuguese in the seventeenth week, |
| Original | The continued fragmentation of its nationalist movements set Angola apart from other Portuguese colonies. |
| Modified | The continued fragmentation of its nationalist movements set Angola apart from other Portuguese colonies! |
| Original | The oppressive tropical climate and hostile African neighbors made life difficult for settlers, many of whom lacked agricultural experience or expertise. |
| Modified | The oppersss tropical control and troops African neighbors made life difficult fsettlers, many of whom lacked agricultural bonanza ro xpertise. |
| **Changes** | Word replacement, punctuation replacement, flipping characters and introducing typos. |
| **Observation** | In some cases, the replaced words fit the context to some extent. In other cases, the new words were completely out of context. This is mainly because word replacement did not consider the whole sentence but rather the word to be replaced. |
+ +Table 12: Obfuscated examples generated using the A* obfuscation technique (Bevendorff et al., 2019) + +
| | Example |
|---|---|
| Original | Protect personal information with the MyID identity theft monitoring solution. |
| Modified | Protect personal info with the MyID identity theft monitoring solution. |
| Original | The possibility that Internet users will be able to hide what they do from the ubiquitous ad tracking is a big win for consumers concerned with Internet privacy. |
| Modified | The prospect that Internet users will be able to hide what they do from the ubiquitous ad tracking is a big victorious for consumers concerned with Internet privacy. |
| Original | The other example was that of a woman who had fallen and broken her arm. |
| Modified | The other example was that of a schoolgirl who had fallen and broken her arm. |
| **Changes** | Controlled word replacement that is based on the sentiment of the word to be replaced. |
| **Observation** | Similar to other techniques that use word replacement, sometimes the replaced word either has the wrong part of speech or changes the meaning. It does better than naive word replacement techniques because of the added rules on candidate words. |
+ +Table 13: Obfuscated examples generated using Mutant-X (Mahmood et al., 2019) + +
| | Example |
|---|---|
| Original | Some of the relevant items with regard to maintaining and strengthening health systems include: Neither side purposely disrupted health systems during the conflict. |
| Modified | Some of the relevant points regarding the preservation and strengthening of health systems are: no side that targeted health systems during the conflict. |
| Original | Zimbabwe lost over two thirds of their physicians in the 1990s. |
| Modified | Zimbabwe lost more than two-thirds of its doctors in the 1990s. |
| Original | The initial reasons for United States intervention in Angola were primarily economic. |
| Modified | The initial reasons for the U.S. intervention in Angola were mainly economic. |
| Original | Two important US officials in Luanda, Robert W. Hultslander, the CIA station chief, and Tom Killoran, the American Consul General, agreed that ... |
| Modified | Two main U.S. officials in Luanda, Robert W. Hultslander, the CIA head of state, and Tom Killoran, the U.S. consulate, accepted that ... |
| Original | Over half of Cuba's doctors left during the revolution. |
| Modified | More than half of the Cuban doctors were abandoned during the revolution. |
| Original | Since 1961, the US had been supporting Holden Roberto with a modest stipend of $10,000 a year. |
| Modified | Since 1961, the United States has supported Holden Roberto with a modest stock exchange of $10,000 per year. |
| **Changes** | Word replacement, rephrasing sentences, contracting/expanding acronyms, and adding spaces. |
| **Observation** | Back translation is a powerful text generation tool. Rephrasing a sentence implicitly replaces some words with synonyms that fit the context, and in some cases changes the grammatical structure of the sentence as well. In addition, expanding an acronym, e.g., replacing US with United States, or vice versa, or adding proper spacing might hide some writing habits of the author. |
+ +Table 14: Obfuscated examples generated using back translation (Fan et al., 2021) + +
| | Example |
|---|---|
| Original | Du Mortier and Coninx describe their use of MHUs with the International Committee of the Red Cross during the conflict in Columbia in 2005. |
| Modified | Du Mortier and Coninx describe his use of MHUs with the International Committee of the white Cross during the conflict in maryland in 2005. |
| Original | They are generally expensive, however, and the fact that they only provide services intermittently tends to affect when they are appropriate for use. |
| Modified | They are not expensive, however, so the fact that they can provide services intermittently tends toward affect when they are appropriate for use. |
| Original | Sloppy dressers generally look as if they slept in the clothes they are wearing. |
| Modified | Sloppy dressers generally look as if they slept on the clothes they are wearing. |
| **Changes** | Word replacement. |
| **Observation** | As shown in the examples, word replacement, even when the context is considered, sometimes leads to choosing the wrong word. Here, the chosen word fits the context, i.e., it is sensible, but changes the meaning compared to the original sentence. |
+ +Table 15: Obfuscated examples generated using Lexical Substitution using BERT (Mansoorizadeh et al., 2016) \ No newline at end of file diff --git a/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/images.zip b/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..028b6e6a41a4032304a87c768d6c6029e4b7a825 --- /dev/null +++ b/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28094d719c46914fa48ee4cec951e7a53ed3543a989e3ea845cf678fd06e6d1c +size 1335888 diff --git a/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/layout.json b/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1318c1297043e40ed19c4f66bdcdd6652a3695a2 --- /dev/null +++ b/amultifacetedframeworktoevaluateevasioncontentpreservationandmisattributioninauthorshipobfuscationtechniques/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6180656a3afe644436f926935c3c6f0215f17d6833c64e3e1e1eabdd767c3ab +size 473116 diff --git a/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/7129bf85-d043-4555-b14d-94d0e41ebc29_content_list.json b/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/7129bf85-d043-4555-b14d-94d0e41ebc29_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..35a0ee3cc9a641cdd0176be8cc965ebfb7f796f6 --- /dev/null +++ 
b/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/7129bf85-d043-4555-b14d-94d0e41ebc29_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0fb5cbc99d137b6b751ea9fe5f47a8e8a9f5c0afc3ee55f28d1350c3c81f361e +size 137963 diff --git a/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/7129bf85-d043-4555-b14d-94d0e41ebc29_model.json b/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/7129bf85-d043-4555-b14d-94d0e41ebc29_model.json new file mode 100644 index 0000000000000000000000000000000000000000..eda5bf8fa34474ca70b0213a3ce5a29df924064d --- /dev/null +++ b/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/7129bf85-d043-4555-b14d-94d0e41ebc29_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f57fb21e68548798cb57d22dec7610f849785ed9daae073f6bf6f1d8c6430157 +size 162050 diff --git a/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/7129bf85-d043-4555-b14d-94d0e41ebc29_origin.pdf b/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/7129bf85-d043-4555-b14d-94d0e41ebc29_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2c565ee095f3c498778073eb83ac88a56fb26b24 --- /dev/null +++ b/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/7129bf85-d043-4555-b14d-94d0e41ebc29_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9189b9b1549b58f745a86d4ffb781033a92b265e996b6679518f34e306354a8a +size 6566480 diff --git a/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/full.md b/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..d37bc0fa779808f6f44a1dfb9b0b80c16911b371 --- /dev/null +++ b/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/full.md @@ -0,0 +1,512 @@ +# A Multilingual Perspective Towards the Evaluation of Attribution Methods in Natural Language Inference + +Kerem Zaman* + +UNC Chapel Hill + +kzman@cs.unc.edu + +Yonatan Belinkov† + +Technion - Israel Institute of Technology + +belinkov@technion.ac.il + +# Abstract + +Most evaluations of attribution methods focus on the English language. In this work, we present a multilingual approach for evaluating attribution methods for the Natural Language Inference (NLI) task in terms of faithfulness and plausibility. First, we introduce a novel cross-lingual strategy to measure faithfulness based on word alignments, which eliminates the drawbacks of erasure-based evaluations. We then perform a comprehensive evaluation of attribution methods, considering different output mechanisms and aggregation methods. Finally, we augment the XNLI dataset with highlight-based explanations, providing a multilingual NLI dataset with highlights, to support future exNLP studies. Our results show that attribution methods performing best for plausibility and faithfulness are different. + +# 1 Introduction + +The opaqueness of large pre-trained models like BERT (Devlin et al., 2019) and GPT (Radford and Narasimhan, 2018) motivates developing explanation methods (Wallace et al., 2020), which aim to attribute importance to particular input features (Springenberg et al., 2015; Bach et al., 2015; Ribeiro et al., 2016; Sundararajan et al., 2017), such as words in a textual input. Two main criteria for evaluating such methods are plausibility and faithfulness (Jacovi and Goldberg, 2020). 
Plausibility can be defined as the consistency between explanations and human expectations, while faithfulness is defined as the consistency between explanations and the model's underlying decision-making process. + +Prior evaluations of attributions along these dimensions (Atanasova et al., 2020; DeYoung et al., + +2020; Ding and Koehn, 2021) suffer from several limitations. First, they have been limited in the range of considered attribution methods and the mechanism of calculating the attributions. Second, standard faithfulness evaluations, such as erasure-based ones (DeYoung et al., 2020), entail running the model on examples outside of the training distribution (Bastings and Filippova, 2020). Third, prior plausibility evaluations are limited to English-only datasets due to the lack of multilingual datasets with highlighted explanations. + +In this work, we aim to fill these gaps. Our main contribution is a new framework for evaluating the faithfulness of attribution methods. Inspired by Jacovi and Goldberg (2020)'s criterion for faithful explanations as giving similar explanations for similar inputs, we propose to use cross-lingual sentences (translations) as similar inputs. Given a multilingual model, we argue that faithful attributions should point to words that are aligned in two translations of the same sentence. This approach avoids out-of-distribution inputs by utilizing cross-lingual sentences as naturally occurring input perturbations. + +We focus on Natural Language Inference (NLI) as a case study, since it is a central task that has been widely used as a test bed for attribution methods (Atanasova et al., 2020; DeYoung et al., 2020; Jain and Wallace, 2019; Kim et al., 2020; Wegreffe and Marasovic, 2021; Prasad et al., 2021). We compare eight attribution methods, including different mechanisms of computation varying the output and the aggregation of input feature importance scores. 
+ +First, we experiment with the cross-lingual XNLI dataset (Conneau et al., 2018), multilingual BERT (Devlin et al., 2019), and XLM-R (Conneau et al., 2020), and discover large differences in the faithfulness of different attribution methods. Second, we find that certain attributions are more plausible and that the choice of computation mechanism has a large effect in some cases. As far as we know, this is the first comprehensive study investigating the effect of different types of outputs when evaluating attributions. + +Informed by our comprehensive evaluation, we augment the multilingual XNLI dataset (Conneau et al., 2018) with highlight-based explanations by extracting highlights for the English part of XNLI and projecting along word alignments to other languages. We perform a plausibility evaluation with the resulting dataset, which we dub e-XNLI, and perform a human evaluation on a subset of the dataset to validate its adequacy. + +Finally, when comparing the ranking of attribution methods by plausibility and faithfulness, we find that no single method performs best. Different methods have different pros and cons, and may therefore be useful in different scenarios. In summary, this work provides: + +- A novel faithfulness evaluation framework. +- A comprehensive evaluation of attribution methods, which may guide practitioners when applying such methods. +- A dataset containing explanations in multiple languages for the NLI task, which may support future multilingual exNLP studies. + +# 2 Background + +# 2.1 Properties for Evaluating Attributions + +Many properties have been defined to evaluate explanations with respect to different aspects, such as plausibility and faithfulness (Jacovi and Goldberg, 2020), sufficiency (DeYoung et al., 2020), stability and consistency (Robnik-Sikonja and Bohanec, 2018), and confidence indication (Atanasova et al., 2020). As two prominent ones, we focus on faithfulness and plausibility.
+ +# 2.1.1 Faithfulness + +Faithfulness is the measure of how much an interpretation overlaps with the reasoning process of the model. In other words, if the scores given by an attribution method are compatible with the decision process behind the model, the interpretation is considered faithful. Such compatibility may be instantiated in different ways. For instance, Ding and Koehn (2021) measure faithfulness through model consistency and input consistency. For model consistency, they compare attribution scores of a given model and its distilled version. For input consistency, they compare attribution scores of perturbed input pairs. + +Perturbing inputs or erasing parts of the input is a widely-used technique for faithfulness evaluation (Arras et al., 2017; Serrano and Smith, 2019; DeYoung et al., 2020; Ding and Koehn, 2021; Atanasova et al., 2020). The basic idea is to observe the effect of changing or removing parts of inputs on model output. For instance, if removing words with high attribution scores changes the model output, then the explanation is faithful. For these methods, the change in prediction score is usually assumed to be caused by deletion of the significant parts from the input. However, the main reason might be the out-of-distribution (OOD) inputs created by the perturbations (Bastings and Filippova, 2020). The dependence on perturbations that result in OOD inputs is the main drawback of common faithfulness evaluation methods. In Section 3 we propose a new evaluation that overcomes this drawback. + +# 2.1.2 Plausibility + +Plausibility is a measure of how much an explanation overlaps with human reasoning (Ding and Koehn, 2021). In particular, if an attribution method gives higher scores to the parts of the input that affect the decision according to humans, then it is plausible. Typically, human-annotated highlights (parts of the input) are used for plausibility evaluation (Wiegreffe and Marasovic, 2021), which we also follow in this work.
However, some recent studies use lexical agreement (Ding and Koehn, 2021), human fixation patterns based on eye-tracking measurements (Hollenstein and Beinborn, 2021), and machine translation quality estimation (Fomicheva et al., 2021). + +# 2.2 Overview of Attribution Methods + +In this work, we focus on the evaluation of local post-hoc methods, which provide explanations to the output of a model for a particular input by applying additional operations to the model's prediction (Danilevsky et al., 2020). Local post-hoc methods can be grouped into three categories: methods based on gradients, perturbations, or simplification (Atanasova et al., 2020). In gradient-based methods, the gradient of the model's output with respect to the input is used in various ways for calculating attribution scores on the input. Perturbation-based methods calculate attribution scores according to the change in the model's output after perturbing the input in different ways. Simplification-based methods simplify the model to assign attributions. For instance, LIME (Ribeiro et al., 2016) trains a simpler surrogate model covering the local neighborhood of the given input. Other post-hoc methods outside of these three categories (Kokhlikyan et al., 2020) include Layer Activation (Karpathy et al., 2015), which uses activations of each neuron in the output of a specific layer, and NoiseTunnel (Smilkov et al., 2017; Adebayo et al., 2018). + +The attribution methods we evaluate are: InputXGradient (Shrikumar et al., 2017), Saliency (Simonyan et al., 2014), GuidedBackprop (Springenberg et al., 2015), and IntegratedGradients (Sundararajan et al., 2017) as gradient-based methods; Occlusion (Zeiler and Fergus, 2014) and Shapley Value Sampling (Ribeiro et al., 2016) as perturbation-based; LIME (Ribeiro et al., 2016) as simplification-based; and Layer Activation (Karpathy et al., 2015). Details about these methods appear in Appendix B.
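As a concrete illustration of the gradient-based family, the sketch below computes Saliency and InputXGradient for a toy linear softmax classifier. This is not the paper's setup (which uses BERT-family models); the model, `W`, and `x` are illustrative assumptions, with the gradient of the top-class score derived analytically.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def top_class_gradient(W, x):
    """Analytic gradient of the top-class softmax score w.r.t. the input x,
    for a toy linear model f(x) = softmax(W @ x)."""
    p = softmax(W @ x)
    c = int(np.argmax(p))
    # d p_c / d x = p_c * (W[c] - sum_j p_j * W[j])
    return p[c] * (W[c] - p @ W)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))   # hypothetical: 3 classes, 5 input features
x = rng.normal(size=5)

grad = top_class_gradient(W, x)
saliency = np.abs(grad)       # Saliency: absolute gradient
input_x_grad = x * grad       # InputXGradient: input times gradient
```

For the real models, the same quantities would be obtained with automatic differentiation (e.g., via Captum) rather than a closed-form gradient.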
+ +# 2.3 Output Mechanisms and Aggregation Methods + +Most previous studies compute attributions when the output is the top prediction score. More formally, let $f(\mathbf{x}^{(i)})$ denote the output of a classification layer, where $x^{(i)}$ is the $i$ -th instance of the dataset. Then, the score of the top predicted class can be expressed as $\max f(\mathbf{x}^{(i)})$ . We also compare with the case when the output is the loss value calculated with respect to the gold label. For the common cross-entropy loss, the loss output can be expressed as $y^{(i)}\log (f(\mathbf{x}^{(i)}))$ where $y^{(i)}$ is the gold label. Furthermore, some attribution methods, such as InputXGradient and Saliency, return importance scores for each dimension of each input word embedding, which need to be aggregated to obtain a single score for each word. While prior studies use different aggregation operations, namely mean and $L_{2}$ , we examine their effect exhaustively. + +Denote the importance score for the $k$ -th dimension of the $j$ -th word embedding of $\mathbf{x}^{(i)}$ as $u_{jk}^{(i)}$ . Then we obtain an attribution score per word, $\omega_{\mathbf{x}_j}^{(i)}$ , using mean aggregation as follows:

$$
\omega_ {\mathbf {x} _ {j}} ^ {(i)} = \frac {1}{d} \sum_ {k = 0} ^ {d} u _ {j k} ^ {(i)} \tag {1}
$$

where $d$ is the number of dimensions for the embedding. Similarly, we define the attribution score per word using $L_{2}$ aggregation as follows:

$$
\omega_ {\mathbf {x} _ {j}} ^ {(i)} = \sqrt {\sum_ {k = 0} ^ {d} \left(u _ {j k} ^ {(i)}\right) ^ {2}}. \tag {2}
$$

# 2.4 Natural Language Inference

Natural Language Inference (NLI) is a well-established Natural Language Understanding (NLU) task where the objective is deciding the relation between given sentence pairs (Consortium et al., 1996; Condoravdi et al., 2003; Bos and Markert, 2005; Dagan et al., 2005; MacCartney and Manning, 2009; Poliak, 2020).
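The two aggregation operations of Eqs. 1 and 2 can be sketched as follows (a minimal numpy sketch; the `scores` matrix of per-dimension attributions is a hypothetical example):

```python
import numpy as np

def aggregate_mean(scores):
    """Eq. 1: average the per-dimension attributions into one score per word."""
    return scores.mean(axis=-1)

def aggregate_l2(scores):
    """Eq. 2: L2 norm of the per-dimension attributions, one score per word."""
    return np.sqrt((scores ** 2).sum(axis=-1))

# hypothetical per-dimension attributions: 2 words x 2 embedding dimensions
scores = np.array([[3.0, -4.0],
                   [1.0, 1.0]])
mean_agg = aggregate_mean(scores)  # [-0.5, 1.0]
l2_agg = aggregate_l2(scores)      # [5.0, sqrt(2)]
```

Note how mean aggregation lets opposite-signed dimensions cancel (first word), while the $L_{2}$ norm does not; this cancellation is one plausible reason the two aggregations behave so differently in the experiments below.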
When a sentence pair is given, namely a premise and a hypothesis, there are 3 possible outcomes: (i) premise entails hypothesis; (ii) premise and hypothesis contradict; or (iii) they are neutral. This setting makes the task suitable to be modeled as a text classification task. + +Although there are many human-annotated NLI datasets, we focus on the MNLI (Williams et al., 2018b), XNLI (Conneau et al., 2018) and e-SNLI (Camburu et al., 2018) datasets. MNLI is a collection of 433K sentence pairs from 10 genres of written and spoken English where pairs are labeled as entailment, contradiction or neutral. This dataset is also part of a general NLU benchmark called GLUE (Wang et al., 2018). XNLI is the crosslingual extension of the MNLI dataset in which sentence pairs from the validation and test sets of MNLI are translated into 15 languages. The e-SNLI dataset is the enhanced version of SNLI (Bowman et al., 2015a), an English-only NLI dataset having the same format as MNLI, with human-annotated explanations in the form of highlights. + +# 3 Faithfulness + +# 3.1 Evaluation Methods + +# 3.1.1 Crosslingual Faithfulness Evaluation + +In faithfulness evaluation, erasure-based methods examine the drop in prediction scores by removing the important tokens from the input (Section 2.1.1). However, the drop in the prediction scores may be the result of the altered, out-of-distribution inputs (Bastings and Filippova, 2020). To overcome this problem, we design a new strategy to evaluate faithfulness by relying on cross-lingual models and datasets. Before diving into details, let us recall Corollary 2 from Jacovi and Goldberg (2020). + +**Corollary 2** An interpretation system is unfaithful if it provides different interpretations for similar inputs and outputs. + +The key intuition behind our method is to use translation pairs to provide similar inputs to a single model.
In particular, we assume a multilingual model that can accept inputs from different languages, such as multilingual BERT (mBERT; Devlin et al. 2019). Then, we can examine the attribution scores of matching parts (words or phrases) of the similar inputs.2

![](images/202e7a29ab9c144498336647ff314a9bef7c096ba5bc466b19f428d12d85784d.jpg)

![](images/e6937bb7b450ddd90f8a14ca5c340e5a8ad75d8703ebe2003cc3395f82b34ad6.jpg)
Figure 1: Illustration of cross-lingual faithfulness evaluation. (a) For any en-XX sentence pair (in this example, English-German), we pass each item of the pair through the cross-lingual model and attribution method, to get attribution scores. (b) We extract word alignments by using awesome-align and (c) align scores for the words in German with the ones in the English language by summing the scores of corresponding German words for each English word. (d) Finally, we get two different distributions for the English sentence: the calculated attribution scores and the aligned attribution scores. We compare them to evaluate faithfulness.

![](images/1534c44872a34827c58928e49660e65fb347f973f94aac9b3571c0dc4e7bba32.jpg)

This idea consists of several steps. First, construct multiway translation pairs whose sources and targets are English and other languages, respectively. Second, calculate attribution scores for instances in English and other languages. Third, align the attribution scores between source and target through word alignments. Finally, correlate attribution scores computed for English instances with the ones for corresponding words in other languages. By looking at the correlation between corresponding parts of the inputs, we measure how consistent the model is for similar inputs. Figure 1 illustrates the cross-lingual faithfulness evaluation procedure.
+ +More formally, let $\mathbf{x}_c^{(i)} = \langle x_{c,1}^{(i)},x_{c,2}^{(i)},\ldots ,x_{c,n}^{(i)}\rangle$ denote the $i$ -th instance of the dataset for language $c$ (out of $C$ languages), where $x_{c,j}^{(i)}$ stands for the $j$ -th word of the instance. Let $A = \{(x_{en,k}^{(i)},x_{c,j}^{(i)}):x_{en,k}^{(i)}\in \mathbf{x}_{en}^{(i)},x_{c,j}^{(i)}\in \mathbf{x}_c^{(i)}\}$ be the set of words from $\mathbf{x}_c^{(i)}$ that are aligned with words in the corresponding English sentence, $\mathbf{x}_{en}^{(i)}$ . Denote by $\omega_{x_{c,j}}^{(i)}$ the attribution score for word $x_{c,j}^{(i)}$ and let $\omega_{\mathbf{x}_c}^{(i)} = \langle \omega_{x_{c,1}}^{(i)},\omega_{x_{c,2}}^{(i)},\dots ,\omega_{x_{c,n}}^{(i)}\rangle$ . In order to align attribution scores for instances from another language with the English ones, we define the aligned attribution score for each word in the reference language as the sum of the attribution scores of the corresponding words in the target language:

$$
\bar {\omega} _ {x _ {c, k}} ^ {(i)} = \sum_ {\left(x _ {e n, k} ^ {(i)}, x _ {c, j} ^ {(i)}\right) \in A} \omega_ {x _ {c, j}} ^ {(i)} \tag {3}
$$

By aligning scores, we obtain equivalent attribution scores in the target language for each word in the source language. For the example in Figure 1, we have $\overline{\omega}_{8\mathrm{pm}}^{(i)} = \omega_{20}^{(i)} + \omega_{\mathrm{Uhr}}^{(i)}$ , because $\{(8\mathrm{pm},20),(8\mathrm{pm},\mathrm{Uhr})\} \subset A$ .
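The alignment-and-sum step of Eq. 3, followed by the Spearman correlation used for the faithfulness score, can be sketched as follows. The sentences, scores, and alignment pairs are toy values in the spirit of Figure 1, not real model outputs.

```python
import numpy as np

def align_scores(alignment, target_scores, n_src):
    """Eq. 3: the aligned score of each source (English) word is the sum of
    the attribution scores of the target-language words aligned to it."""
    aligned = np.zeros(n_src)
    for src_idx, tgt_idx in alignment:
        aligned[src_idx] += target_scores[tgt_idx]
    return aligned

def spearman(a, b):
    """Spearman correlation = Pearson correlation of the ranks
    (no tie handling needed for these distinct toy values)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Toy example: "8pm" (index 2) aligns to both "20" and "Uhr".
en_scores = np.array([0.1, 0.3, 0.6])         # e.g. ["meet", "at", "8pm"]
de_scores = np.array([0.2, 0.1, 0.35, 0.3])   # e.g. ["treffen", "um", "20", "Uhr"]
alignment = [(0, 0), (1, 1), (2, 2), (2, 3)]  # (english_idx, german_idx) pairs
aligned = align_scores(alignment, de_scores, n_src=3)  # [0.2, 0.1, 0.65]
rho = spearman(en_scores, aligned)
```

A real implementation would take the alignment pairs from awesome-align output and average the per-instance correlations across languages, as in Eq. 4.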
+ +Finally, we define the cross-lingual faithfulness $(\rho)$ of a dataset as the average Spearman correlation between attribution scores for English and aligned attribution scores for all other languages:

$$
\rho = \frac {1}{C - 1} \frac {1}{M} \sum_ {c \neq e n} \sum_ {i = 0} ^ {M} \rho_ {\omega_ {\mathbf {x} _ {e n}} ^ {(i)}, \bar {\omega} _ {\mathbf {x} _ {c}} ^ {(i)}} \tag {4}
$$

The main advantage of this approach is in avoiding the OOD problem: Translation pairs form naturally occurring perturbations that are part of the model's training distribution, unlike the synthetic inputs formed by erasure-based methods. (We use English as the reference language since the multilingual model performs best on it and since the word aligner we use was originally fine-tuned and evaluated on en-XX language pairs.) We also reduce language-specific bias by using translations of the same sentence in different languages. Furthermore, our approach provides a grayscale notion of faithfulness, as advocated by Jacovi and Goldberg (2020). + +# 3.1.2 Erasure-based Faithfulness Evaluation + +We compare our method with erasure-based faithfulness evaluation metrics, namely sufficiency and comprehensiveness (DeYoung et al., 2020). We stick to DeYoung et al.'s definitions and choices along the experiments. + +Let $m(\mathbf{x}^{(i)})_j$ be the model output of the $j$ -th class for the $i$ -th data point and $r^{(i)}$ be the most important tokens to be erased, decided according to attribution scores.
Comprehensiveness measures the drop in prediction probability after removing the important tokens (higher values are better):

$$
\text{comprehensiveness} = m \left(\mathbf {x} ^ {(i)}\right) _ {j} - m \left(\mathbf {x} ^ {(i)} \backslash r ^ {(i)}\right) _ {j} \tag {5}
$$

Sufficiency measures the drop when only the important tokens are kept (lower values are better):

$$
\text{sufficiency} = m \left(\mathbf {x} ^ {(i)}\right) _ {j} - m \left(r ^ {(i)}\right) _ {j} \tag {6}
$$

$r^{(i)}$ is the top- $k_{d}$ words according to their attribution scores, where $k_{d}$ depends on the dataset. However, choosing an appropriate $k$ can be tricky, especially when human rationales are not available to decide an average length. Also, the variable $k_{d}$ makes scores incomparable across datasets. To solve these issues, DeYoung et al. propose Area Over Perturbation Curve (AOPC) metrics for sufficiency and comprehensiveness, based on bins of tokens to be deleted. They calculate comprehensiveness and sufficiency when deleting the top tokens in each bin, and obtain AOPC metrics by averaging the scores for each bin. Here we group the top $1\%$ , $5\%$ , $10\%$ , $20\%$ , $50\%$ tokens into bins in the order of decreasing attribution scores. + +# 3.2 Faithfulness Experiments + +Experimental setup We use the XNLI dataset (Conneau et al., 2018) to construct translation pairs where source and target are English and other languages, respectively. We use awesome-align (Dou and Neubig, 2021) to align attribution scores for the corresponding words in translation pairs.4
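The AOPC-style comprehensiveness and sufficiency computation of Section 3.1.2 can be sketched as follows. The `toy_model` is a stand-in for the actual classifier (any function mapping a token list to class probabilities); the tokens and attribution scores are hypothetical.

```python
import numpy as np

def aopc(model, tokens, scores, j, bins=(0.01, 0.05, 0.10, 0.20, 0.50)):
    """AOPC comprehensiveness and sufficiency (Eqs. 5-6), averaged over
    bins of top-k% tokens ranked by decreasing attribution score."""
    order = np.argsort(scores)[::-1]   # indices by decreasing attribution
    base = model(tokens)[j]
    comp, suff = [], []
    for frac in bins:
        k = max(1, int(round(frac * len(tokens))))
        top = set(order[:k].tolist())
        without_top = [t for i, t in enumerate(tokens) if i not in top]
        only_top = [t for i, t in enumerate(tokens) if i in top]
        comp.append(base - model(without_top)[j])  # Eq. 5
        suff.append(base - model(only_top)[j])     # Eq. 6
    return float(np.mean(comp)), float(np.mean(suff))

def toy_model(tokens):
    # toy "classifier": probability of class 0 grows with the count of "good"
    n_good = sum(t == "good" for t in tokens)
    p0 = 1.0 / (1.0 + np.exp(-(n_good - 1)))
    return np.array([p0, 1.0 - p0])

tokens = ["the", "movie", "was", "really", "good", "good"]
scores = np.array([0.0, 0.1, 0.0, 0.2, 0.9, 0.8])
comprehensiveness, sufficiency = aopc(toy_model, tokens, scores, j=0)
```

Here the attributions correctly single out the "good" tokens, so deleting them moves the toy model's prediction (high comprehensiveness) while keeping only them largely preserves it (low sufficiency).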
| Method | ρ (TP) | ρ (Loss) |
| --- | --- | --- |
| InputXGradient (μ) | .0588 | .0756 |
| InputXGradient (L2) | .7202 | .7208 |
| Saliency (μ) | .5676 | .5680 |
| Saliency (L2) | .5664 | .5670 |
| GuidedBackprop (μ) | .0026 | .0020 |
| GuidedBackprop (L2) | .5664 | .5670 |
| IntegratedGrads (μ) | .1878 | .2439 |
| IntegratedGrads (L2) | .6095 | .5636 |
| Activation (μ) | .5552 | .5552 |
| Activation (L2) | .6965 | .6965 |
| LIME | .0421 | .0677 |
| Occlusion | .1480 | .2049 |
| Shapley | .2283 | .2742 |
+ +Table 1: Cross-lingual faithfulness results: Average correlations measured for different attribution methods on the XNLI dataset. Scores are averaged across all models including different architectures and seeds. Attributions are performed with respect to the top prediction (TP) score and the loss. InputXGradient with $L_{2}$ aggregation is the best performing method in both cases. + +![](images/d56b5d22448e0c860d4a306c28140d4dc8aac38b75c57edf3fc4661caa55220b.jpg) +Figure 2: Comparison of cross-lingual faithfulness along output and aggregation dimensions. $L_{2}$ mostly outperforms mean $(\mu)$ aggregation and calculations with respect to the loss are the same as or slightly better than ones with respect to the top prediction score. + +We fine-tune mBERT and XLM-R$_{\mathrm{base}}$ for English on the MNLI dataset (Williams et al., 2018a) with 3 different seeds for each. For cross-lingual faithfulness evaluation, we only use the languages that are common in the top-5 languages for both types of cross-lingual models (when performing zero-shot prediction on non-English languages). This gives Bulgarian, German, Spanish and French $(C = 5)$ . The cross-lingual performance of our models on all XNLI languages appears in Appendix A. + +# 3.2.1 Cross-lingual Faithfulness Experiments + +Table 1 shows cross-lingual faithfulness results for each attribution method, when computing attributions with regard to top prediction or loss, and when aggregating input scores with $L_{2}$ or mean aggregation. The results exhibit a large variation, indicating that our cross-lingual faithfulness evaluation is able to expose differences between attribution methods. InputXGradient with $L_{2}$ aggregation is the most faithful attribution method for both types of attribution calculation.
We also observe that gradient-based attribution methods (first 8 rows in Table 1) usually generate more faithful explanations than perturbation-based ones (last two rows), in line with prior work (Atanasova et al., 2020). + +Figure 2 shows the effect of aggregation methods and output mechanisms on cross-lingual faithfulness. In all cases, $L_{2}$ aggregation outperforms mean aggregation by large margins, except for Saliency, where mean aggregation is slightly better than $L_{2}$ aggregation. Since Saliency returns the absolute value, which is analogous to $L_{1}$ aggregation, the exception in the results makes sense. Considering output mechanisms, attribution scores calculated with respect to loss are more faithful than ones calculated with respect to the top prediction score in almost all cases. For Integrated Gradients with $L_{2}$ aggregation and GuidedBackprop with mean aggregation, calculating attribution scores with respect to the top prediction score performs better. + +Recall that our cross-lingual faithfulness measure averages correlations across languages (Eq. 4). To analyze the effect of languages, especially the ones that are poorly represented by multilingual models, we repeat the same experiments with the worst-performing 3 languages: Thai, Swahili, and Urdu. Table 2 shows correlations per language when averaged across all combinations of methods, outputs and aggregations. The results show little variation across top-performing languages. When the relation between NLI performance and faithfulness is considered, it turns out there is a strong correlation between them (Pearson correlation coefficient and p-value are as follows: $r = 0.83$ , $p = 0.02$ ) and poorly represented languages yield lower faithfulness scores. Detailed results per language and attribution method are given in Appendix G. + +
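The reported correlation between per-language NLI accuracy and faithfulness can be approximately reproduced from the rounded per-language values in Table 2 (a sketch on rounded values, so the coefficient only approximates the paper's $r = 0.83$):

```python
import numpy as np

# Rounded per-language values from Table 2:
# rho = cross-lingual faithfulness, acc = zero-shot NLI accuracy
# for bg, de, es, fr, th, sw, ur.
rho = np.array([0.36, 0.38, 0.41, 0.40, 0.14, 0.27, 0.25])
acc = np.array([0.73, 0.74, 0.77, 0.76, 0.63, 0.58, 0.62])

# Pearson correlation coefficient between faithfulness and accuracy
r = np.corrcoef(rho, acc)[0, 1]  # ~0.83
```

The poorly represented languages (th, sw, ur) pull both accuracy and faithfulness down together, which is what drives the strong positive correlation.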
| | bg | de | es | fr | th | sw | ur |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ρ | .36 | .38 | .41 | .40 | .14 | .27 | .25 |
| Acc. | .73 | .74 | .77 | .76 | .63 | .58 | .62 |
+ +Table 2: Cross-lingual faithfulness results $(\rho)$ per language averaged across all attribution methods on the XNLI dataset, and NLI accuracies for comparison. + +
| Method | comp. ↑ (TP) | comp. ↑ (Loss) | suff. ↓ (TP) | suff. ↓ (Loss) |
| --- | --- | --- | --- | --- |
| InputXGradient (μ) | .2945 | .3072 | .2812 | .2784 |
| InputXGradient (L2) | .3146 | .2980 | .2479 | .2682 |
| Saliency (μ) | .3075 | .3017 | .2588 | .2584 |
| Saliency (L2) | .3158 | .3010 | .2640 | .2642 |
| GuidedBackprop (μ) | .2845 | .2851 | .2739 | .2902 |
| GuidedBackprop (L2) | .3158 | .3010 | .2640 | .2642 |
| IntegratedGrads (μ) | .3043 | .2931 | .2860 | .2308 |
| IntegratedGrads (L2) | .3098 | .3160 | .2670 | .2800 |
| Activation (μ) | .2781 | .2781 | .2551 | .2551 |
| Activation (L2) | .3111 | .3111 | .3209 | .3209 |
| LIME | .2968 | .3034 | .2888 | .2961 |
| Occlusion | .2898 | .3080 | .2887 | .2656 |
| Shapley | .2908 | .3113 | .2788 | .2592 |
+ +Table 3: Erasure-based faithfulness results: Average AOPC comprehensiveness and sufficiency scores for different attribution methods on the English split of XNLI. The scores are averaged across all models including different architectures and seeds. Attribution calculations are performed with respect to the top prediction score (TP) and the loss. Different attribution methods perform best for different output mechanisms in terms of comprehensiveness and sufficiency. + +# 3.2.2 Erasure-based Faithfulness Experiments + +Table 3 shows the results of erasure-based faithfulness evaluation (comprehensiveness and sufficiency), for each attribution method. In terms of comprehensiveness, Saliency and GuidedBackpropagation with $L_{2}$ aggregation are the most faithful attribution methods when the output is the top prediction score; IntegratedGradients with $L_{2}$ aggregation is the most faithful one when the output is the loss. For sufficiency, InputXGradient with $L_{2}$ and IntegratedGradients with mean aggregation seem to be the most faithful method for cases when the output is the top prediction score and loss, respectively. Interestingly, most of the results are quite similar and differences between methods are not as large as in the cross-lingual faithfulness evaluation. + +Figure 3 shows the effect of aggregation method and output mechanism on comprehensiveness. For all attribution methods, $L_{2}$ beats mean aggregation + +![](images/239175ebedb2a0b8fff0f55978f599a21d01b3cc302231d3eeef8a6c379fb76a.jpg) +Figure 3: Comparison of comprehensiveness results along output and aggregation dimensions (higher is better). $L_{2}$ outperforms mean aggregation for most cases and calculations with respect to the loss outperform or are on par with calculations with respect to the top prediction score for non-gradient-based attribution methods. 
+ +except for Saliency and InputXGradient with loss as output. While different output mechanisms are better for different methods, calculating attributions with respect to loss is as good as or slightly better than calculating with respect to the top prediction score for all non-gradient-based methods. + +Figure 4 shows the effect of the aggregation method and output mechanism on sufficiency. Unlike comprehensiveness, there is no clear supremacy of one method over another for either aggregation methods or output mechanisms. + +![](images/254ac41d6838560f615047f181709bfd3fcd52e311263388f45c22bafeff7421.jpg) +Figure 4: Comparison of sufficiency results along output and aggregation dimensions (lower is better). Different aggregation and different output mechanisms perform better for different attribution methods. + +# 3.2.3 Cross-lingual vs. Erasure-based Faithfulness + +The results of cross-lingual faithfulness and erasure-based metrics (comprehensiveness and sufficiency) differ in two main aspects: + +- Perturbation-based methods exhibit more faithful explanations when evaluated by erasure-based metrics than when evaluated by cross-lingual faithfulness. We interpret this pattern as a result of the OOD issue caused by erasure-based evaluation, which unjustifiably favors perturbation-based attributions. The relative improvement for perturbation-based methods can be attributed to noise due to the OOD perturbations used for calculating comprehensiveness and sufficiency. + +- Erasure-based faithfulness metrics are unable to properly distinguish between different attribution methods, since the differences are dwarfed by the noise introduced by the OOD perturbations. The standard deviation of faithfulness scores across all attribution methods is 0.25 for cross-lingual faithfulness, but only 0.01 and 0.02 for comprehensiveness and sufficiency, respectively.
+ +# 4 Plausibility + +In this section, we present details about plausibility evaluation and results, and introduce a new dataset containing highlight-based explanations in multiple languages. + +# 4.1 Plausibility Evaluation + +To evaluate the plausibility of attribution methods, we measure agreement with human rationales, following Atanasova et al. (2020). This evaluation measures how much the attribution scores overlap with human annotations by calculating Mean Average Precision (MAP) across a dataset. For each instance in the dataset, Average Precision (AP) is calculated by comparing attribution scores $\omega^{(i)}$ with gold rationales, $\mathbf{w}^{(i)}$ , where $\omega^{(i)}$ stands for the attribution scores calculated for the dataset instance $\mathbf{x}^{(i)}$ and $\mathbf{w}^{(i)}$ stands for the sequence of binary labels indicating whether the token is annotated as the rationale. For a dataset $X = \{\mathbf{x}^{(i)}|i\in [1,M]\}$ , the MAP score is defined as: + +$$ +\operatorname {M A P} (\omega , X) = \frac {1}{M} \sum_ {i \in [ 1, M ]} A P \left(\mathbf {w} ^ {(i)}, \boldsymbol {\omega} ^ {(i)}\right) \tag {7} +$$ + +Note that AP is the weighted mean of precisions at each threshold where the weight is the change in recall between two successive thresholds. + +# 4.2 Plausibility Experiments + +Experimental Setup We use the e-SNLI dataset (Camburu et al., 2018) to obtain human annotated highlights. As the classifier, we use a BERT-base model fine-tuned on the SNLI dataset (Bowman et al., 2015b) with 2 different seeds, as well as the one provided by TextAttack (Morris et al., 2020). + +
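The AP/MAP computation of Eq. 7 can be sketched as follows (a minimal numpy sketch; the gold rationales and attribution scores are hypothetical):

```python
import numpy as np

def average_precision(gold, scores):
    """AP: precision at each threshold, weighted by the change in recall.
    With one positive found per step, each weight is 1 / (number of positives)."""
    order = np.argsort(scores)[::-1]       # rank words by decreasing attribution
    gold = np.asarray(gold)[order]
    hits = np.cumsum(gold)
    precision = hits / (np.arange(len(gold)) + 1)
    return float((precision * gold).sum() / gold.sum())

def mean_average_precision(dataset):
    """Eq. 7: MAP over (gold rationale, attribution scores) pairs."""
    return float(np.mean([average_precision(w, s) for w, s in dataset]))

# Toy instance: gold rationale marks words 1 and 3, and the attribution
# scores rank exactly those two words highest, so AP is perfect.
gold = [0, 1, 0, 1]
scores = [0.1, 0.4, 0.2, 0.9]
ap = average_precision(gold, scores)  # (1/1 + 2/2) / 2 = 1.0
```

A plausibility evaluation would run `mean_average_precision` over all instances, with `gold` taken from the human-annotated highlights.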
| Method | MAP (TP) | MAP (Loss) |
| --- | --- | --- |
| InputXGradient (μ) | .395 | .397 |
| InputXGradient (L2) | .651 | .653 |
| Saliency (μ) | .653 | .655 |
| Saliency (L2) | .653 | .655 |
| GuidedBackprop (μ) | .413 | .414 |
| GuidedBackprop (L2) | .653 | .655 |
| IntegratedGrads (μ) | .473 | .465 |
| IntegratedGrads (L2) | .633 | .599 |
| Activation (μ) | .230 | .230 |
| Activation (L2) | .437 | .437 |
| LIME | .407 | .400 |
| Occlusion | .547 | .476 |
| Shapley | .522 | .460 |
+ +Table 4: Plausibility results: MAP scores for different attribution methods on the e-SNLI dataset averaged across models. Attribution calculations are performed with respect to the top prediction score (TP) and the loss. Saliency with both aggregations and GuidedBackprop with $L_{2}$ aggregation are the best performing methods in both cases. + +Results Table 4 shows GuidedBackprop with $L_{2}$ aggregation and Saliency with both aggregations are the most plausible methods for both types of output. Like cross-lingual faithfulness results, gradient-based methods mostly generate more plausible explanations than perturbation-based ones, as in prior work (Atanasova et al., 2020). + +Figure 5 shows the effect of aggregation method and output mechanism on plausibility. In all cases, $L_{2}$ outperforms mean aggregation by large margins except for Saliency, where the scores for mean aggregation are the same as those for $L_{2}$ aggregation. Considering that Saliency returns the absolute value, which is analogous to $L_{1}$ aggregation, the exception in the results makes sense as in the cross-lingual faithfulness results. In almost all cases, calculating attribution scores with respect to loss is the same or slightly better than calculating with respect to the top prediction score. For Integrated Gradients, Occlusion, and LIME, choosing the top prediction score as output outperforms the loss. + +# 4.3 e-XNLI dataset + +Since prior studies for plausibility evaluation are limited to English-only datasets for the NLI task, we augment the multilingual XNLI dataset (Conneau et al., 2018) with highlight-based explanations by utilizing attribution methods. + +![](images/aa03c3fd8a2e47a13eecb899cc0c094afacafff2523d510965e77ae854ff2e78.jpg) +Figure 5: Comparison of plausibility results along output and aggregation dimensions. $L_{2}$ outperforms mean aggregation for almost all attribution methods and choosing loss as output is mostly the same or slightly better than the top prediction score.
| Lang | MAP | Lang | MAP | Lang | MAP |
| --- | --- | --- | --- | --- | --- |
| ar | 0.663 | es | 0.766 | th | 0.932 |
| bg | 0.701 | fr | 0.739 | tr | 0.665 |
| de | 0.732 | hi | 0.604 | ur | 0.575 |
| el | 0.696 | ru | 0.686 | vi | 0.572 |
| en | 1.0 | sw | 0.58 | zh | 0.543 |
+ +Table 5: Plausibility results: MAP scores measured on the newly introduced e-XNLI dataset (using Saliency with loss as output and $L_{2}$ aggregation). + +First, we compute attribution scores on the English split of the XNLI dataset using an mBERT model fine-tuned on MNLI and Saliency with $L_{2}$ aggregation and loss as output, which gave the most plausible attribution on e-SNLI (Section 4.2). To extract rationales from the English split, we binarize the attribution scores with respect to the threshold, 0.167, giving the best F1 score on e-SNLI with the TextAttack model. Finally, we project extracted rationales to other languages using awesome-align. + +To validate the automatically generated highlights, we follow two approaches. First, we measure the plausibility of the same attribution method used to extract rationales for those languages. This approach investigates whether the aligned rationales are able to follow the same reasoning paths for each language. As Table 5 shows, the automatically aligned highlights in e-XNLI are similarly plausible explanations for most languages.
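The rationale-extraction and projection steps above can be sketched as follows. The 0.167 threshold is the one reported in the text; the attribution scores and alignment pairs are hypothetical, and a real pipeline would take the alignments from awesome-align output.

```python
import numpy as np

THRESHOLD = 0.167  # best-F1 binarization threshold on e-SNLI, from the text

def binarize(scores, threshold=THRESHOLD):
    """Turn per-word attribution scores into 0/1 rationale highlights."""
    return (np.asarray(scores) >= threshold).astype(int)

def project_rationale(en_rationale, alignment, n_tgt):
    """Project English highlights to a target language along word alignments:
    a target word is highlighted if it aligns to a highlighted English word."""
    tgt = np.zeros(n_tgt, dtype=int)
    for en_idx, tgt_idx in alignment:
        if en_rationale[en_idx]:
            tgt[tgt_idx] = 1
    return tgt

en_scores = [0.05, 0.30, 0.02, 0.41]          # hypothetical English attributions
en_rationale = binarize(en_scores)            # [0, 1, 0, 1]
alignment = [(0, 0), (1, 2), (3, 1), (3, 3)]  # hypothetical (en_idx, tgt_idx) pairs
tgt_rationale = project_rationale(en_rationale, alignment, n_tgt=4)  # [0, 1, 1, 1]
```

One English word may align to several target words (as in the "8pm"/"20 Uhr" example of Figure 1), so a single highlighted source word can project to multiple target highlights.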
| Language | Precision | Recall | F1 |
| --- | --- | --- | --- |
| ar | .64 | .73 | .68 |
| en | .79 | .78 | .79 |
| ru | .93 | .78 | .85 |
| tr | .77 | .71 | .74 |
+ +Table 6: Human evaluation for a sample of e-XNLI: Precision, recall and F1 scores for four languages. + +Second, we perform a human evaluation on a subset of the created dataset. For four XNLI languages, we sample 10 examples per label (30 total) and request annotators to evaluate the correctness of highlights by following the same procedure carried out in e-SNLI (Camburu et al., 2018). Then, we measure precision, recall, and F1 scores between automatically generated highlights and those manually edited by human annotators. As Table 6 shows, automatically generated highlights mostly agree with human reasoning. We present more details about the human evaluation in Appendix H. + +We make the e-XNLI dataset publicly available under MIT license to facilitate research on explainable NLP in a multilingual setting. + +# 5 Limitations + +In this work, we examine a wide range of attribution methods along output and aggregation dimensions. Prior work (Madsen et al., 2021) shows that faithfulness of attribution methods depends on both tasks and models, but our work is limited to the NLI task while considering different models. Despite the importance of NLI for evaluation in NLP (Poliak, 2020), our conclusions might not generalize to other tasks. In addition, while we experiment with multiple random seeds, our experiments are limited to two architectures: BERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020). + +The results of cross-lingual faithfulness experiments are sensitive to language choice as discussed in Section 3.2.1, so we present the results calculated with the languages well-represented by multilingual models. The multilingual dataset we provide, e-XNLI, consists of automatically-extracted highlight-based explanations and should be used with caution for future exNLP studies since we only performed a human evaluation on a small subset of the dataset. 
In particular, training self-explanatory models on this dataset may lead to undesired outcomes, such as poor explanation quality. + +# 6 Conclusion + +We introduce a novel cross-lingual strategy to evaluate the faithfulness of attribution methods, which eliminates the out-of-distribution input problem of common erasure-based faithfulness evaluations. Then, we perform a comprehensive comparison of different attribution methods having different characteristics in terms of plausibility and faithfulness for the NLI task. The experiments show that there is no one-size-fits-all solution for local post-hoc explanations. Our results highlight that practitioners should choose an attribution method with a proper output mechanism and aggregation method according to the property of explanation in question: + +- For most attribution methods, $L_{2}$ aggregation and attribution calculation with respect to loss provide more faithful and plausible explanations. +- Erasure-based faithfulness metrics cannot properly differentiate between attribution methods. +- Gradient-based attribution methods usually generate more plausible and faithful explanations than perturbation-based methods. +- To obtain the most plausible explanations, one should choose Guided Backpropagation with $L_{2}$ and Saliency with either aggregation method, and calculate scores with respect to the loss. +- To obtain the most faithful explanations, one should choose InputXGradient with $L_{2}$ regardless of output mechanism. + +Finally, we present e-XNLI, a multilingual dataset with automatically generated highlight explanations, to facilitate multilingual exNLP studies. + +# Acknowledgements + +We would like to thank Adir Rahamim for the ideas on representational similarity experiments of multilingual models, Oleg Serikov for evaluating automatically extracted highlights in the Russian subset of our e-XNLI dataset, and Ramazan Pala for reviews while drafting this paper.
This research was partly supported by the ISRAEL SCIENCE FOUNDATION (grant No. 448/20) and by an Azrieli Foundation Early Career Faculty Fellowship.

# References

Julius Adebayo, Justin Gilmer, Ian J. Goodfellow, and Been Kim. 2018. Local explanation methods for deep neural networks lack sensitivity to parameter values. ArXiv, abs/1810.03307.
Leila Arras, F. Horn, Gregoire Montavon, Klaus-Robert Müller, and Wojciech Samek. 2017. "What is relevant in a text document?": An interpretable machine learning approach. PLoS ONE, 12.
Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020. A diagnostic study of explainability techniques for text classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3256-3274, Online. Association for Computational Linguistics.
Sebastian Bach, Alexander Binder, Gregoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE, 10(7):1-46.
Jasmijn Bastings and Katja Filippova. 2020. The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 149-155, Online. Association for Computational Linguistics.
Johan Bos and Katja Markert. 2005. Recognising textual entailment with logical inference. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 628-635, Vancouver, British Columbia, Canada. Association for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015a. A large annotated corpus for learning natural language inference. In EMNLP.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015b.
A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural language inference with natural language explanations. In NeurIPS.
Cleo Condoravdi, Dick Crouch, Valeria de Paiva, Reinhard Stolle, and Daniel G. Bobrow. 2003. Entailment, intensionality and text understanding. In Proceedings of the HLT-NAACL 2003 Workshop on Text Meaning, pages 38-45.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In ACL.
Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
The FraCaS Consortium, Robin Cooper, Dick Crouch, Jan Van Eijck, Chris Fox, Josef Van Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, Steve Pulman, Ted Briscoe, Holger Maier, and Karsten Konrad. 1996. Using the framework.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In MLCW.
Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A survey of the state of explainable AI for natural language processing. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 447-459, Suzhou, China. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443-4458, Online. Association for Computational Linguistics.
Shuoyang Ding and Philipp Koehn. 2021. Evaluating saliency methods for neural language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5034-5052, Online. Association for Computational Linguistics.
Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2112-2128, Online. Association for Computational Linguistics.
M. Fomicheva, Lucia Specia, and Nikolaos Aletras. 2021. Translation error detection as rationale extraction. ArXiv, abs/2108.12197.
Nora Hollenstein and Lisa Beinborn. 2021. Relative importance in sentence processing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 141-150, Online. Association for Computational Linguistics.
Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness?
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4198-4205, Online. Association for Computational Linguistics.
Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In NAACL.
Andrej Karpathy, Justin Johnson, and Li Fei-Fei. 2015. Visualizing and understanding recurrent networks. ArXiv, abs/1506.02078.
Siwon Kim, Jihun Yi, Eunji Kim, and Sungroh Yoon. 2020. Interpretation of NLP models through input marginalization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3154-3167, Online. Association for Computational Linguistics.
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. 2020. Captum: A unified and generic model interpretability library for PyTorch.
Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. 2019. Similarity of neural network representations revisited. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 3519-3529. PMLR.
Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumont, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Theo Matussière, Lysandre Debut, Stas Bekman, Pierrick Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. 2021. Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175-184, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Bill MacCartney and Christopher D. Manning. 2009. An extended model of natural logic. In Proceedings of the Eighth International Conference on Computational Semantics, pages 140-156, Tilburg, The Netherlands. Association for Computational Linguistics.
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, and Siva Reddy. 2021. Evaluating the faithfulness of importance measures in NLP by recursively masking allegedly important tokens and retraining. ArXiv, abs/2110.08412.
John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 119-126.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.
Adam Poliak. 2020. A survey on recognizing textual entailment as an NLP evaluation. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 92-109, Online. Association for Computational Linguistics.
Grusha Prasad, Yixin Nie, Mohit Bansal, Robin Jia, Douwe Kiela, and Adina Williams. 2021. To what extent do human explanations of model behavior align with actual model behavior? In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 1-14, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Alec Radford and Karthik Narasimhan. 2018. Improving language understanding by generative pretraining.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?": Explaining the predictions of any classifier.
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
M. Robnik-Sikonja and Marko Bohanec. 2018. Perturbation-based explanations of prediction models. In Human and Machine Learning.
Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, and Nadir Durrani. 2021. Fine-grained interpretation and causation analysis in deep NLP models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorials, Online.
Stefan Schweter. 2020. BERTurk - BERT models for Turkish.
Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931-2951, Florence, Italy. Association for Computational Linguistics.
Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 3145-3153. PMLR.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Workshop at International Conference on Learning Representations.
Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B. Viégas, and Martin Wattenberg. 2017. SmoothGrad: removing noise by adding noise. ArXiv, abs/1706.03825.
J.T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. 2015. Striving for simplicity: The all convolutional net. In ICLR (workshop track).
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, page 3319-3328. JMLR.org.
Erik Štrumbelj and Igor Kononenko. 2010. An efficient explanation of individual classifications using game theory. J. Mach.
Learn. Res., 11:1-18.
Eric Wallace, Matthew Thomas Gardner, and Sameer Singh. 2020. Interpreting predictions of NLP models. In EMNLP.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In BlackboxNLP@EMNLP.
Sarah Wiegreffe and Ana Marasovic. 2021. Teach me to explain: A review of datasets for explainable NLP. In Proceedings of NeurIPS.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018a. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.
Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018b. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Matthew D. Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In ECCV.

# A Cross-lingual performance of multilingual classifiers

Table 7 shows the average accuracies of the mBERT and XLM-R models fine-tuned on MNLI for each language in the XNLI dataset. Both models are fine-tuned for 3 epochs with a learning rate of 2e-5, a total batch size of 256, and 3 different seeds.
| Language | mBERT | XLM-R$_{\mathrm{base}}$ |
| --- | --- | --- |
| ar | 0.6574 | 0.7132 |
| bg | 0.6952 | 0.7745 |
| de | 0.7120 | 0.7649 |
| el | 0.6724 | 0.7597 |
| en | 0.8147 | 0.8436 |
| es | 0.7504 | 0.7887 |
| fr | 0.7358 | 0.7774 |
| hi | 0.6061 | 0.6959 |
| ru | 0.6906 | 0.7549 |
| sw | 0.5137 | 0.6558 |
| th | 0.5468 | 0.7143 |
| tr | 0.6323 | 0.7269 |
| ur | 0.5856 | 0.6528 |
| vi | 0.7043 | 0.7466 |
| zh | 0.6952 | 0.7318 |

Table 7: Accuracies averaged across seeds of the mBERT and XLM-R$_{\mathrm{base}}$ models fine-tuned on MNLI for each XNLI language.

# B Attribution Methods

In this work, we focus on a wide range of attribution methods by investigating different combinations of output mechanisms and aggregation methods. We consider two different output options when calculating importance scores per word: (a) the top prediction score; (b) the loss value calculated when the ground-truth label is given. In the following, we refer to the output as $f_{tp}$ when it is the top prediction score and $f_{\mathcal{L}}$ when it is the loss. While some methods inherently return a single score per word, others return importance scores for each dimension of the corresponding word vector. Since we want to obtain a single score per word, those scores need to be aggregated. We investigate $L_{2}$ and mean aggregations separately.

**Implementation Details** We build our framework upon the Captum library (Kokhlikyan et al., 2020) to use existing implementations of many attribution methods. We use the HuggingFace transformers (Wolf et al., 2020) and datasets (Lhoest et al., 2021) libraries to access pretrained models and datasets. We also rely on Scikit-learn (Pedregosa et al., 2011) for evaluation scores such as Average Precision (AP) and Spearman correlation.

# B.1 Saliency

Saliency (Simonyan et al., 2014) computes attribution scores as the absolute value of the gradients with respect to the inputs. More formally, let $u_{j}$ be the embedding for word $x_{j}$ of $\mathbf{x}^{(i)}$, the $i$-th instance of any dataset. Then the attribution score for each dimension of the embedding is defined as

$$
\left| \nabla_{u_{jk}} f\left(\mathbf{x}^{(i)}\right) \right| \tag{8}
$$

We obtain an attribution score per word, $\omega_{x_j}^{(i)}$, by aggregating scores across each word embedding.
Using mean aggregation, it is defined as follows:

$$
\omega_{x_{j}}^{(i)} = \frac{1}{d} \sum_{k = 1}^{d} \left| \nabla_{u_{jk}} f(\mathbf{x}^{(i)}) \right| \tag{9}
$$

where $d$ is the number of dimensions of the word embedding. Similarly, using $L_{2}$ aggregation, we obtain

$$
\omega_{x_{j}}^{(i)} = \sqrt{\sum_{k = 1}^{d} \left| \nabla_{u_{jk}} f(\mathbf{x}^{(i)}) \right|^{2}} \tag{10}
$$

# B.2 InputXGradient

InputXGradient (Shrikumar et al., 2017) calculates attribution scores by multiplying the input with the gradients with respect to the input. More formally, the attribution score for each dimension is defined as

$$
\nabla_{u_{jk}} f\left(\mathbf{x}^{(i)}\right) u_{jk} \tag{11}
$$

We obtain attribution scores per word in the same way as for Saliency, using mean/$L_{2}$ aggregations.

# B.3 Guided Backpropagation

Guided Backpropagation (Springenberg et al., 2015) produces attribution scores by calculating gradients with respect to the input. Unlike the other methods, it overrides the gradient of the ReLU activation so that only positive gradients pass through. We obtain attribution scores per word using $L_{2}$ and mean aggregations as in the previously described methods.

# B.4 Integrated Gradients

Integrated Gradients (Sundararajan et al., 2017) produces attribution scores by summing gradients along each dimension from some baseline input to a given input. The attribution score for each
We use the word embedding of the [PAD] token as the baseline input for each word except for [SEP] and [CLS] tokens (Sajjad et al., 2021). We obtain attribution scores per word using $L_{2}$ and mean aggregations as in the previous methods. + +Higher values of $m$ would produce a better approximation, but also make attribution calculation computationally expensive. We need to find a sweet spot between approximation and computational resources. For plausibility experiments, we select $m$ according to validation performance based on MAP scores. Among $\{50, 75, 100\}$ , we choose $m = 100$ for mean aggregation on calculations with respect to top prediction and $m = 50$ for all other combinations of aggregation methods and output mechanisms. For cross-lingual faithfulness experiments, we select $m$ according to the evaluation on the validation set based on the Spearman correlation coefficient values. Among $\{50, 75, 100\}$ , we choose $m = 100$ for all calculations with XLM-Rbase and mBERT except calculations involving mean aggregation on mBERT, for which we choose $m = 75$ . For erasure-based faithfulness experiments, we use the same values of $m$ for the sake of a fair comparison. + +# B.5 LIME + +LIME (Ribeiro et al., 2016) produces attribution scores by training a surrogate linear model using the points around the input created by perturbing the input and output of perturbations from the original model. A random subset of the input is replaced by a baseline value to create perturbations. We use the word embedding of the [PAD] token as the baseline value (as in Integrated Gradients). Since we create the perturbations by replacing whole word vectors, we obtain a single score per word, which eliminates the need for aggregation. We use 50 samples for training the surrogate model as the default value for the LIME implementation in Captum. 

# B.6 Occlusion

Occlusion (Zeiler and Fergus, 2014) produces attribution scores by calculating differences in the output after replacing the input with baseline values over a sliding window. We select the shape of the sliding window so that it occludes only the embedding of one word at a time, and we use the word embedding of the [PAD] token as the baseline value (as in Integrated Gradients and LIME). Since we create the perturbations by replacing whole word vectors, we obtain a single score per word.

# B.7 Shapley Value Sampling

In Shapley Value Sampling (Štrumbelj and Kononenko, 2010), we take a random permutation of the input features (in our case, the word embeddings of the input sequence) and add them one by one to a given baseline (in our case, the embedding vector for the [PAD] token), producing an attribution score from the resulting difference in the output. The scores are averaged across several samples. We choose the feature groups so that one score corresponds to a single word, which eliminates the need for aggregation. We take 25 samples for calculating attributions, the default value for the Shapley Value Sampling implementation in Captum.

# B.8 Activation

Layer Activation (Karpathy et al., 2015) produces attribution scores from the activations in the output of the specified layer. We select the embedding layer for this purpose, which yields an attribution score for each dimension of the embedding, equal to $u_{jk}$. Then, we obtain attribution scores per word using $L_{2}$ and mean aggregations as in the other methods.

# C Representational Similarity of Translation Pairs

Our cross-lingual faithfulness strategy relies on the assumption that translation pairs constitute similar inputs for a multilingual model. To test this assumption, we create a setup comparing the representational similarities of inputs. First, we take premise-hypothesis pairs and their translations from the XNLI dataset for the selected language pair.
We encode each pair by obtaining the last hidden state representations before the classifier head. We take $n$ representations from the source language and the corresponding representations from the target language to create source and target batch pairs, namely $(b_{s}, b_{t})$. Then, we create $k$ random batches, $b_{i}$, by selecting $n$ representations among the target representations for each one, and we measure the CKA similarity (Kornblith et al., 2019) of each representation batch in the source language with each batch of representations in the target language. If our assumption holds, we expect matching representation batches to be more similar than any other batch pair. For each batch in the source language, we test whether the CKA similarity measure assigns the highest similarity to the matching batch, and compute the accuracy over batches.

![](images/f561bf726eabfbff7ccb6683bef4d444e4b9128b76af9a57780cc260b35839a1.jpg)
Figure 6: Accuracies for the CKA similarity analysis for different models. XLM-R-finetuned and mBERT-finetuned results are averaged across models fine-tuned with different seeds for each.

We use 5000 examples from the test split of the XNLI dataset, selecting $n = 8$ and $k = 10$.

Figure 6 shows the accuracies for the different models. We perform our similarity analysis on multilingual models (vanilla mBERT, and the mBERT and XLM-R$_{\mathrm{base}}$ models fine-tuned on MNLI and used in our faithfulness experiments) and monolingual models (BERT$_{\mathrm{base}}$ fine-tuned on MNLI and a Turkish BERT (Schweter, 2020)). The results show that translation pairs form similar inputs for multilingual models compared to monolingual models, regardless of fine-tuning. While fine-tuned mBERT yields lower accuracies than vanilla mBERT, for XLM-R$_{\mathrm{base}}$ the effect varies across language pairs.
Although monolingual representations of translation pairs lead to the lowest accuracies, as expected, the higher accuracies of Turkish BERT, which is pre-trained on a completely unrelated language, compared to fine-tuned English BERT$_{\mathrm{base}}$ warrant further investigation.

# D Ablation Study on Cross-lingual Faithfulness

To investigate the effect of word alignments, we run our cross-lingual faithfulness evaluation framework with random word alignments for a set of attribution methods and compare the results with the ones obtained with awesome-align (Dou and Neubig, 2021). To obtain random alignments between translation pairs, we modify the IterMax algorithm, which Dou and Neubig (2021) proposed as a baseline method, by replacing the similarity matrix with a random matrix. We perform both types of evaluation with one of the mBERT models we fine-tuned.

![](images/ac63205e206b3bdac886632c5ab2e06bacd59778129d92883001c49c7dfccd9d.jpg)
Figure 7: Comparison of cross-lingual faithfulness scores calculated with awesome-align and with random word alignments for different attribution methods.

Figure 7 shows the comparison of awesome-align with random word alignments. While using awesome-align provides comparable scores across attribution methods, random alignments lead to near-zero correlations $(\rho)$. Thus, we show empirically that word alignment is an essential component of our method.

# E Cross-lingual Faithfulness Results per Architecture

Table 8 shows cross-lingual faithfulness results for each architecture, mBERT and XLM-R$_{\mathrm{base}}$, separately.

# F Erasure-based Faithfulness Results per Architecture

Table 9 and Table 10 show comprehensiveness and sufficiency scores for each architecture, mBERT and XLM-R$_{\mathrm{base}}$, separately.
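The comprehensiveness and sufficiency scores in Tables 9 and 10 are the standard erasure-based metrics of DeYoung et al. (2020): comprehensiveness is the drop in the predicted-class probability when the rationale tokens are removed, and sufficiency is the drop when only the rationale tokens are kept. A minimal sketch, where `predict` is a hypothetical stand-in for a function returning the model's predicted-class probability:

```python
def comprehensiveness(predict, tokens, rationale):
    # Drop in predicted-class probability after erasing the rationale tokens;
    # higher values indicate a more faithful rationale.
    without = [t for i, t in enumerate(tokens) if i not in rationale]
    return predict(tokens) - predict(without)

def sufficiency(predict, tokens, rationale):
    # Drop in predicted-class probability when keeping only the rationale;
    # lower values indicate a more faithful rationale.
    only = [t for i, t in enumerate(tokens) if i in rationale]
    return predict(tokens) - predict(only)
```

In practice the top-$k$ tokens of an attribution method play the role of `rationale`, so these metrics score the attribution method itself.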
| Method (ρ) | mBERT (TP) | mBERT (Loss) | XLM-R$_{\mathrm{base}}$ (TP) | XLM-R$_{\mathrm{base}}$ (Loss) |
| --- | --- | --- | --- | --- |
| InputXGradient (μ) | .0562 ± .002 | .0758 ± .002 | .0615 ± .001 | .0754 ± .003 |
| InputXGradient (L2) | .7067 ± .001 | .7078 ± .001 | .7336 ± .003 | .7338 ± .003 |
| Saliency (μ) | .6269 ± .003 | .6283 ± .003 | .5082 ± .001 | .5078 ± .002 |
| Saliency (L2) | .6276 ± .003 | .6290 ± .003 | .5053 ± .001 | .5050 ± .002 |
| GuidedBackprop (μ) | .0024 ± .003 | -.0000 ± .001 | .0028 ± .000 | .0041 ± .001 |
| GuidedBackprop (L2) | .6276 ± .003 | .6290 ± .003 | .5053 ± .001 | .5050 ± .002 |
| IntegratedGrads (μ) | .1860 ± .008 | .2680 ± .007 | .1897 ± .021 | .2198 ± .008 |
| IntegratedGrads (L2) | .5910 ± .009 | .5302 ± .005 | .6279 ± .018 | .5970 ± .017 |
| Activation (μ) | .6974 ± .001 | .6974 ± .001 | .4130 ± .001 | .4130 ± .001 |
| Activation (L2) | .6992 ± .000 | .6992 ± .000 | .6938 ± .000 | .6938 ± .000 |
| LIME | .0659 ± .014 | .0934 ± .006 | .0182 ± .009 | .0420 ± .008 |
| Occlusion | .2281 ± .007 | .3132 ± .005 | .0680 ± .028 | .0966 ± .007 |
| Shapley | .3734 ± .049 | .4058 ± .040 | .0833 ± .016 | .1426 ± .033 |

Table 8: Cross-lingual faithfulness results: Scores are measured for different attribution methods on the XNLI dataset and averaged across models trained with different seeds for each architecture. Attribution calculations are performed with respect to the top prediction (TP) score and the loss.

# G Cross-lingual Faithfulness Results per Language

Our cross-lingual faithfulness evaluation averages correlations across languages. For completeness, we provide in Tables 11-17 the results of the cross-lingual faithfulness evaluation per language.

# H Human Evaluation for e-XNLI

A subset of our dataset is evaluated by NLP researchers (the authors and a colleague of one of the authors) from Turkey, Israel, and Russia.

The annotators followed the e-SNLI guidelines specified in Section 3 of Camburu et al. (2018) for evaluating whether automatically extracted highlight-based explanations are correct. Note that incorrectly predicted examples are ignored during the evaluation.

# I Computational Resources

We mainly used Google Colab for the experiments, and a Titan RTX in some cases. All experiments for the gradient-based attribution methods and Activation take from 5 minutes to 1 hour, while the perturbation-based approaches take several hours. In particular, the experiments for Shapley Value Sampling take a few days, since its implementation does not use batched operations.
| Method (comprehensiveness ↑) | mBERT (TP) | mBERT (Loss) | XLM-R$_{\mathrm{base}}$ (TP) | XLM-R$_{\mathrm{base}}$ (Loss) |
| --- | --- | --- | --- | --- |
| InputXGradient (μ) | .2658 ± .016 | .2959 ± .012 | .3232 ± .011 | .3186 ± .031 |
| InputXGradient (L2) | .3136 ± .011 | .3080 ± .005 | .3155 ± .021 | .2880 ± .005 |
| Saliency (μ) | .3009 ± .018 | .2891 ± .036 | .3141 ± .008 | .3142 ± .005 |
| Saliency (L2) | .3128 ± .018 | .2896 ± .037 | .3188 ± .009 | .3123 ± .004 |
| GuidedBackprop (μ) | .2709 ± .002 | .2514 ± .039 | .2981 ± .023 | .3187 ± .015 |
| GuidedBackprop (L2) | .3128 ± .018 | .2896 ± .037 | .3188 ± .009 | .3123 ± .004 |
| IntegratedGrads (μ) | .2557 ± .033 | .2618 ± .010 | .3529 ± .021 | .3244 ± .018 |
| IntegratedGrads (L2) | .2989 ± .004 | .2969 ± .014 | .3208 ± .028 | .3350 ± .031 |
| Activation (μ) | .2504 ± .009 | .2504 ± .009 | .3057 ± .006 | .3057 ± .006 |
| Activation (L2) | .2940 ± .010 | .2940 ± .010 | .3282 ± .017 | .3282 ± .017 |
| LIME | .2733 ± .026 | .2657 ± .016 | .3203 ± .028 | .3412 ± .024 |
| Occlusion | .2727 ± .034 | .3101 ± .014 | .3068 ± .029 | .3060 ± .016 |
| Shapley | .2660 ± .032 | .3123 ± .007 | .3157 ± .019 | .3103 ± .023 |

Table 9: Comprehensiveness scores per architecture on the English split of the XNLI dataset: Scores are measured for different attribution methods and averaged across models trained with different seeds for each architecture. Attribution calculations are performed with respect to the top prediction (TP) score and the loss.

| Method (sufficiency ↓) | mBERT (TP) | mBERT (Loss) | XLM-R$_{\mathrm{base}}$ (TP) | XLM-R$_{\mathrm{base}}$ (Loss) |
| --- | --- | --- | --- | --- |
| InputXGradient (μ) | .2812 ± .013 | .2716 ± .021 | .2812 ± .007 | .2852 ± .014 |
| InputXGradient (L2) | .2616 ± .043 | .2684 ± .027 | .2342 ± .023 | .2681 ± .026 |
| Saliency (μ) | .2451 ± .028 | .2613 ± .029 | .2724 ± .011 | .2555 ± .004 |
| Saliency (L2) | .2477 ± .022 | .2629 ± .027 | .2804 ± .008 | .2654 ± .007 |
| GuidedBackprop (μ) | .2637 ± .031 | .2913 ± .007 | .2841 ± .032 | .2891 ± .023 |
| GuidedBackprop (L2) | .2477 ± .022 | .2629 ± .027 | .2804 ± .008 | .2654 ± .007 |
| IntegratedGrads (μ) | .2985 ± .008 | .2471 ± .024 | .2734 ± .020 | .2145 ± .011 |
| IntegratedGrads (L2) | .2784 ± .010 | .2788 ± .021 | .2556 ± .006 | .2812 ± .025 |
| Activation (μ) | .2024 ± .017 | .2024 ± .017 | .3079 ± .010 | .3079 ± .010 |
| Activation (L2) | .3340 ± .003 | .3340 ± .003 | .3078 ± .005 | .3078 ± .005 |
| LIME | .2610 ± .005 | .2610 ± .012 | .3167 ± .048 | .3311 ± .005 |
| Occlusion | .2820 ± .006 | .2475 ± .008 | .2955 ± .015 | .2837 ± .006 |
| Shapley | .2538 ± .008 | .1967 ± .008 | .3037 ± .035 | .3218 ± .013 |

Table 10: Sufficiency scores per architecture on the English split of the XNLI dataset: Scores are measured for different attribution methods and averaged across models trained with different seeds for each architecture. Attribution calculations are performed with respect to the top prediction (TP) score and the loss.

| Method (ρ) | mBERT (TP) | mBERT (Loss) | XLM-R$_{\mathrm{base}}$ (TP) | XLM-R$_{\mathrm{base}}$ (Loss) |
| --- | --- | --- | --- | --- |
| InputXGradient (μ) | .0302 ± .001 | .0406 ± .001 | .0506 ± .003 | .0661 ± .002 |
| InputXGradient (L2) | .6731 ± .001 | .6741 ± .002 | .7188 ± .003 | .7189 ± .003 |
| Saliency (μ) | .5778 ± .004 | .5793 ± .004 | .4954 ± .000 | .4951 ± .002 |
| Saliency (L2) | .5787 ± .004 | .5803 ± .004 | .4935 ± .000 | .4930 ± .001 |
| GuidedBackprop (μ) | .0015 ± .001 | -.0045 ± .003 | .0020 ± .001 | .0023 ± .004 |
| GuidedBackprop (L2) | .5787 ± .004 | .5803 ± .004 | .4935 ± .000 | .4930 ± .001 |
| IntegratedGrads (μ) | .1248 ± .005 | .2003 ± .002 | .1768 ± .021 | .2037 ± .009 |
| IntegratedGrads (L2) | .5287 ± .014 | .4585 ± .011 | .6165 ± .016 | .5827 ± .016 |
| Activation (μ) | .6080 ± .001 | .6080 ± .001 | .3824 ± .001 | .3824 ± .001 |
| Activation (L2) | .6653 ± .000 | .6653 ± .000 | .6825 ± .000 | .6825 ± .000 |
| LIME | .0561 ± .015 | .0803 ± .008 | .0115 ± .007 | .0359 ± .009 |
| Occlusion | .1635 ± .008 | .2395 ± .004 | .0509 ± .021 | .0754 ± .004 |
| Shapley | .3348 ± .055 | .3639 ± .044 | .0649 ± .016 | .1165 ± .032 |

Table 11: Cross-lingual faithfulness results for the Bulgarian split of the XNLI dataset: Scores are measured for different attribution methods and averaged across models trained with different seeds for each architecture. Attribution calculations are performed with respect to the top prediction (TP) score and the loss.

| Method (ρ) | mBERT (TP) | mBERT (Loss) | XLM-R$_{\mathrm{base}}$ (TP) | XLM-R$_{\mathrm{base}}$ (Loss) |
| --- | --- | --- | --- | --- |
| InputXGradient (μ) | .0493 ± .003 | .0717 ± .004 | .0621 ± .001 | .0752 ± .003 |
| InputXGradient (L2) | .7052 ± .001 | .7067 ± .001 | .7321 ± .003 | .7329 ± .003 |
| Saliency (μ) | .6152 ± .003 | .6168 ± .003 | .4936 ± .002 | .4929 ± .004 |
| Saliency (L2) | .6159 ± .003 | .6175 ± .003 | .4906 ± .002 | .4900 ± .003 |
| GuidedBackprop (μ) | .0041 ± .005 | .0008 ± .002 | .0030 ± .001 | .0010 ± .001 |
| GuidedBackprop (L2) | .6159 ± .003 | .6175 ± .003 | .4906 ± .002 | .4900 ± .003 |
| IntegratedGrads (μ) | .1919 ± .011 | .2788 ± .011 | .1814 ± .019 | .2219 ± .008 |
| IntegratedGrads (L2) | .5935 ± .007 | .5361 ± .005 | .6245 ± .018 | .5930 ± .017 |
| Activation (μ) | .6960 ± .000 | .6960 ± .000 | .4115 ± .001 | .4115 ± .001 |
| Activation (L2) | .7012 ± .000 | .7012 ± .000 | .6934 ± .000 | .6934 ± .000 |
| LIME | .0692 ± .012 | .0955 ± .005 | .0186 ± .009 | .0399 ± .008 |
| Occlusion | .2226 ± .006 | .3117 ± .005 | .0695 ± .028 | .0978 ± .005 |
| Shapley | .3843 ± .046 | .4145 ± .036 | .0840 ± .017 | .1381 ± .030 |
+ +Table 12: Cross-lingual faithfulness results for the German split of XNLI dataset: Scores are measured for different attribution methods on the XNLI dataset and averaged across models trained with different seeds for each architecture. Attribution calculations are performed with respect to the top prediction (TP) score and the loss. + +
*Entries are the correlation ρ (mean ± std over seeds).*

| Method | mBERT (TP) | mBERT (Loss) | XLM-R_base (TP) | XLM-R_base (Loss) |
| --- | --- | --- | --- | --- |
| InputXGradient (μ) | .0747 ± .001 | .1009 ± .003 | .0726 ± .003 | .0856 ± .005 |
| InputXGradient (L2) | .7179 ± .001 | .7186 ± .001 | .7460 ± .003 | .7461 ± .003 |
| Saliency (μ) | .6576 ± .002 | .6592 ± .002 | .5205 ± .001 | .5205 ± .002 |
| Saliency (L2) | .6582 ± .002 | .6598 ± .002 | .5170 ± .001 | .5172 ± .001 |
| GuidedBackprop (μ) | .0004 ± .003 | .0026 ± .003 | .0035 ± .003 | .0049 ± .002 |
| GuidedBackprop (L2) | .6582 ± .002 | .6598 ± .002 | .5170 ± .001 | .5172 ± .001 |
| IntegratedGrads (μ) | .2189 ± .008 | .3041 ± .005 | .2037 ± .020 | .2374 ± .006 |
| IntegratedGrads (L2) | .6145 ± .010 | .5584 ± .006 | .6384 ± .018 | .6083 ± .018 |
| Activation (μ) | .7521 ± .001 | .7521 ± .001 | .4311 ± .001 | .4311 ± .001 |
| Activation (L2) | .7071 ± .000 | .7071 ± .000 | .7135 ± .000 | .7135 ± .000 |
| LIME | .0716 ± .013 | .1032 ± .009 | .0259 ± .012 | .0500 ± .006 |
| Occlusion | .2693 ± .008 | .3594 ± .006 | .0839 ± .035 | .1202 ± .014 |
| Shapley | .3928 ± .047 | .4238 ± .039 | .1041 ± .022 | .1716 ± .035 |
+ +Table 13: Cross-lingual faithfulness results for the Spanish split of XNLI dataset: Scores are measured for different attribution methods on the XNLI dataset and averaged across models trained with different seeds for each architecture. Attribution calculations are performed with respect to the top prediction (TP) score and the loss. + +
*Entries are the correlation ρ (mean ± std over seeds).*

| Method | mBERT (TP) | mBERT (Loss) | XLM-R_base (TP) | XLM-R_base (Loss) |
| --- | --- | --- | --- | --- |
| InputXGradient (μ) | .0707 ± .003 | .0902 ± .003 | .0605 ± .001 | .0746 ± .001 |
| InputXGradient (L2) | .7308 ± .001 | .7317 ± .001 | .7374 ± .003 | .7372 ± .003 |
| Saliency (μ) | .6570 ± .003 | .6578 ± .002 | .5234 ± .001 | .5226 ± .002 |
| Saliency (L2) | .6574 ± .003 | .6583 ± .002 | .5202 ± .001 | .5199 ± .002 |
| GuidedBackprop (μ) | .0034 ± .005 | .0010 ± .003 | .0028 ± .000 | .0082 ± .002 |
| GuidedBackprop (L2) | .6574 ± .003 | .6583 ± .002 | .5202 ± .001 | .5199 ± .002 |
| IntegratedGrads (μ) | .2082 ± .009 | .2887 ± .009 | .1968 ± .025 | .2163 ± .009 |
| IntegratedGrads (L2) | .6274 ± .008 | .5676 ± .004 | .6321 ± .017 | .6040 ± .016 |
| Activation (μ) | .7333 ± .001 | .7333 ± .001 | .4271 ± .001 | .4271 ± .001 |
| Activation (L2) | .7234 ± .000 | .7234 ± .000 | .6857 ± .000 | .6857 ± .000 |
| LIME | .0668 ± .016 | .0945 ± .005 | .0168 ± .011 | .0420 ± .008 |
| Occlusion | .2568 ± .008 | .3422 ± .007 | .0678 ± .029 | .0929 ± .007 |
| Shapley | .3816 ± .047 | .4209 ± .040 | .0803 ± .011 | .1442 ± .034 |
+ +Table 14: Cross-lingual faithfulness results for the French split of XNLI dataset: Scores are measured for different attribution methods on the XNLI dataset and averaged across models trained with different seeds for each architecture. Attribution calculations are performed with respect to the top prediction (TP) score and the loss. + +
*Entries are the correlation ρ (mean ± std over seeds).*

| Method | mBERT (TP) | mBERT (Loss) | XLM-R (TP) | XLM-R (Loss) |
| --- | --- | --- | --- | --- |
| InputXGradient (μ) | .0024 ± .001 | .0051 ± .001 | .0076 ± .002 | .0174 ± .001 |
| InputXGradient (L2) | .2924 ± .001 | .2953 ± .001 | .2749 ± .001 | .2761 ± .001 |
| Saliency (μ) | .2166 ± .003 | .2196 ± .004 | .1712 ± .002 | .1702 ± .002 |
| Saliency (L2) | .2168 ± .003 | .2198 ± .004 | .1705 ± .002 | .1692 ± .002 |
| GuidedBackprop (μ) | .0007 ± .003 | .0007 ± .001 | .0017 ± .001 | .0025 ± .000 |
| GuidedBackprop (L2) | .2168 ± .003 | .2198 ± .004 | .1705 ± .002 | .1692 ± .002 |
| IntegratedGrads (μ) | .0286 ± .008 | .0758 ± .012 | .0963 ± .014 | .1173 ± .017 |
| IntegratedGrads (L2) | .2597 ± .009 | .2433 ± .009 | .2059 ± .008 | .1970 ± .008 |
| Activation (μ) | .2462 ± .000 | .2462 ± .000 | .1307 ± .000 | .1307 ± .000 |
| Activation (L2) | .2127 ± .000 | .2127 ± .000 | .2007 ± .000 | .2007 ± .000 |
| LIME | .0281 ± .022 | .0173 ± .008 | .0041 ± .002 | .1552 ± .003 |
| Occlusion | .0451 ± .011 | .0591 ± .009 | .0128 ± .003 | .0305 ± .002 |
| Shapley | .3001 ± .063 | .2461 ± .051 | .0283 ± .013 | .0741 ± .015 |
+ +Table 15: Cross-lingual faithfulness results for the Thai split of XNLI dataset: Scores are measured for different attribution methods on the XNLI dataset and averaged across models trained with different seeds for each architecture. Attribution calculations are performed with respect to the top prediction (TP) class and the loss. + +
*Entries are the correlation ρ (mean ± std over seeds).*

| Method | mBERT (TP) | mBERT (Loss) | XLM-R (TP) | XLM-R (Loss) |
| --- | --- | --- | --- | --- |
| InputXGradient (μ) | .0064 ± .001 | .0132 ± .002 | .0167 ± .002 | .0253 ± .003 |
| InputXGradient (L2) | .4598 ± .001 | .4616 ± .001 | .5254 ± .002 | .5255 ± .002 |
| Saliency (μ) | .4083 ± .002 | .4107 ± .002 | .4136 ± .002 | .4132 ± .002 |
| Saliency (L2) | .4084 ± .002 | .4108 ± .002 | .4110 ± .002 | .4104 ± .002 |
| GuidedBackprop (μ) | .0043 ± .002 | -.0015 ± .003 | .0002 ± .003 | .0015 ± .001 |
| GuidedBackprop (L2) | .4084 ± .002 | .4108 ± .002 | .4110 ± .002 | .4104 ± .002 |
| IntegratedGrads (μ) | .0713 ± .005 | .1239 ± .006 | .1137 ± .021 | .1223 ± .005 |
| IntegratedGrads (L2) | .3806 ± .005 | .3298 ± .005 | .4883 ± .009 | .4634 ± .011 |
| Activation (μ) | .4686 ± .001 | .4686 ± .001 | .3400 ± .001 | .3400 ± .001 |
| Activation (L2) | .4987 ± .000 | .4987 ± .000 | .5449 ± .000 | .5449 ± .000 |
| LIME | .0257 ± .015 | .0752 ± .002 | .0128 ± .005 | .1854 ± .001 |
| Occlusion | .0537 ± .003 | .0988 ± .001 | .0416 ± .016 | .0628 ± .010 |
| Shapley | .2424 ± .044 | .2622 ± .044 | .0612 ± .015 | .1174 ± .014 |
+ +Table 16: Cross-lingual faithfulness results for the Swahili split of XNLI dataset: Scores are measured for different attribution methods on the XNLI dataset and averaged across models trained with different seeds for each architecture. Attribution calculations are performed with respect to the top prediction (TP) class and the loss. + +
*Entries are the correlation ρ (mean ± std over seeds).*

| Method | mBERT (TP) | mBERT (Loss) | XLM-R (TP) | XLM-R (Loss) |
| --- | --- | --- | --- | --- |
| InputXGradient (μ) | .0156 ± .003 | .0211 ± .005 | .0147 ± .002 | .0225 ± .004 |
| InputXGradient (L2) | .5522 ± .002 | .5533 ± .002 | .5031 ± .003 | .5037 ± .003 |
| Saliency (μ) | .4492 ± .004 | .4509 ± .004 | .2512 ± .004 | .2513 ± .004 |
| Saliency (L2) | .4499 ± .004 | .4515 ± .004 | .2495 ± .004 | .2496 ± .004 |
| GuidedBackprop (μ) | -.0003 ± .003 | .0006 ± .001 | -.0017 ± .003 | .0011 ± .006 |
| GuidedBackprop (L2) | .4499 ± .004 | .4515 ± .004 | .2495 ± .004 | .2496 ± .004 |
| IntegratedGrads (μ) | .0700 ± .003 | .1453 ± .003 | .0886 ± .009 | .1187 ± .008 |
| IntegratedGrads (L2) | .4451 ± .012 | .3909 ± .008 | .4168 ± .012 | .3955 ± .014 |
| Activation (μ) | .4700 ± .000 | .4700 ± .000 | .2407 ± .001 | .2407 ± .001 |
| Activation (L2) | .5688 ± .000 | .5688 ± .000 | .4820 ± .000 | .4820 ± .000 |
| LIME | .0398 ± .013 | .0593 ± .003 | .0049 ± .003 | .1077 ± .006 |
| Occlusion | .0815 ± .005 | .1382 ± .006 | .0037 ± .012 | .0213 ± .013 |
| Shapley | .2557 ± .044 | .2915 ± .038 | .0229 ± .008 | .0490 ± .009 |
+ +Table 17: Cross-lingual faithfulness results for the Urdu split of XNLI dataset: Scores are measured for different attribution methods on the XNLI dataset and averaged across models trained with different seeds for each architecture. Attribution calculations are performed with respect to the top prediction (TP) class and the loss. \ No newline at end of file diff --git a/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/images.zip b/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..fdd7e9cb58bad5528ff5d7b595dc30f110ef1db4 --- /dev/null +++ b/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f99fcd5fb4953ebef150cd78488b8db2287e4bf7d65e10d21273bb762960556 +size 1705183 diff --git a/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/layout.json b/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..81eb157b933af76cbae135daa54b1cdecce01932 --- /dev/null +++ b/amultilingualperspectivetowardstheevaluationofattributionmethodsinnaturallanguageinference/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c7295ac6693b5229c2c97a466cabc0958e00efdb7a3481eb02928ed20c99f03 +size 583315 diff --git a/anadaptivelogicalruleembeddingmodelforinductivereasoningovertemporalknowledgegraphs/77d512d1-a154-468c-9b6d-2946a2b9e807_content_list.json b/anadaptivelogicalruleembeddingmodelforinductivereasoningovertemporalknowledgegraphs/77d512d1-a154-468c-9b6d-2946a2b9e807_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b3686f3863df9df6ec49c40e589c7f5c740b2d94 --- /dev/null +++ 
b/anadaptivelogicalruleembeddingmodelforinductivereasoningovertemporalknowledgegraphs/77d512d1-a154-468c-9b6d-2946a2b9e807_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6beb66d79c3145b3e9a18fafd84668e5eb3400033275133bbd0ccadadbf9843f +size 90349 diff --git a/anadaptivelogicalruleembeddingmodelforinductivereasoningovertemporalknowledgegraphs/77d512d1-a154-468c-9b6d-2946a2b9e807_model.json b/anadaptivelogicalruleembeddingmodelforinductivereasoningovertemporalknowledgegraphs/77d512d1-a154-468c-9b6d-2946a2b9e807_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2dba7d81d94014173ca10126fd711d5856bd79c8 --- /dev/null +++ b/anadaptivelogicalruleembeddingmodelforinductivereasoningovertemporalknowledgegraphs/77d512d1-a154-468c-9b6d-2946a2b9e807_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a92b24ec386f21641b5cf20613df5eef2bf925a20c9d67d78bef27fab96a76bb +size 109262 diff --git a/anadaptivelogicalruleembeddingmodelforinductivereasoningovertemporalknowledgegraphs/77d512d1-a154-468c-9b6d-2946a2b9e807_origin.pdf b/anadaptivelogicalruleembeddingmodelforinductivereasoningovertemporalknowledgegraphs/77d512d1-a154-468c-9b6d-2946a2b9e807_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a003aeccaa2bbe874983390fcf2a22822bd2d0a1 --- /dev/null +++ b/anadaptivelogicalruleembeddingmodelforinductivereasoningovertemporalknowledgegraphs/77d512d1-a154-468c-9b6d-2946a2b9e807_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:340c02b297185316caa858fac82865e0739883550b48260b5a6b6a048c2606a7 +size 404988 diff --git a/anadaptivelogicalruleembeddingmodelforinductivereasoningovertemporalknowledgegraphs/full.md b/anadaptivelogicalruleembeddingmodelforinductivereasoningovertemporalknowledgegraphs/full.md new file mode 100644 index 0000000000000000000000000000000000000000..128b3a0f04fd0ab32d7768317fa76ba662c2e938 --- /dev/null +++ 
b/anadaptivelogicalruleembeddingmodelforinductivereasoningovertemporalknowledgegraphs/full.md @@ -0,0 +1,392 @@ +# An Adaptive Logical Rule Embedding Model for Inductive Reasoning over Temporal Knowledge Graphs + +Xin Mei*, Libin Yang*, Zuowei Jiang, Xiaoyan Cai† + +Northwestern Polytechnical University, Xi'an, China + +meixin@mail.nwpu.edu.cn, libiny@nwpu.edu.cn + +jiangzw@mail.nwpu.edu.cn, xiaoyanc@nwpu.edu.cn + +# Abstract + +Temporal knowledge graphs (TKGs) extrapolation reasoning predicts future events based on historical information, which has great research significance and broad application value. Existing methods can be divided into embedding-based methods and logical rule-based methods. Embedding-based methods rely on learned entity and relation embeddings to make predictions and thus lack interpretability. Logical rule-based methods bring scalability problems due to being limited by the learned logical rules. We combine the two methods to capture deep causal logic by learning rule embeddings, and propose an interpretable model for temporal knowledge graph reasoning called adaptive logical rule embedding model for inductive reasoning (ALRE-IR). ALRE-IR can adaptively extract and assess reasons contained in historical events, and make predictions based on causal logic. Furthermore, we propose a one-class augmented matching loss for optimization. When evaluated on ICEWS14, ICEWS0515 and ICEWS18 datasets, the performance of ALRE-IR outperforms other state-of-the-art baselines. The results also demonstrate that ALRE-IR still shows outstanding performance when transferred to related dataset with common relation vocabulary, indicating our proposed model has good zero-shot reasoning ability. $^{1}$ + +# 1 Introduction + +Knowledge graphs (KGs) are a form of structured human knowledge. They represent events as triples (subject, relation, object), where subject and object are entities. 
Entities are usually objects and abstract concepts in the real world, and relations represent relationships between entities. KGs have attracted great research interest in both academia and industry (Dong et al., 2014; Nickel et al., 2015; Wang et al., 2017; Hogan et al., 2021), and have been widely used in many real-world applications including relation extraction (Min et al., 2013; Zeng et al., 2015), entity linking (Hua et al., 2015; Mendes et al., 2011), and question answering (Luo et al., 2018; Yih et al., 2015). However, most knowledge graphs are incomplete (Shi and Weninger, 2018; Toutanova and Chen, 2015), which affects their effectiveness and limits the performance of KG-based applications. Reasoning over KGs aims to infer new conclusions based on existing data and predict missing events, which can effectively alleviate this problem. Traditional knowledge graphs contain only static events, yet there is a large amount of available event data with temporal correlations, where entities interact differently over time. Therefore, many temporal knowledge graphs (TKGs) composed of entity interaction data with temporal attributes have emerged (Boschee et al., 2015; Gottschalk and Demidova, 2018, 2019). TKGs extend static triples with a timestamp to represent dynamic events in the form of quadruples (subject, relation, object, timestamp), where the timestamp represents the valid time of the static triple. Compared with traditional static KGs, TKGs have complex temporal dynamic characteristics, which increases the difficulty of reasoning over TKGs.
Extrapolation reasoning learns hidden connections between events from observed historical KGs and then predicts new events at future timestamps (Korkmaz et al., 2015; Muthiah et al., 2015; Phillips et al., 2017), which can be applied in practical scenarios such as disaster relief (Signorini et al., 2011) and financial analysis (Bollen + +et al., 2011). This paper focuses on extrapolation reasoning task. + +Recently, many research efforts have been put into extrapolation reasoning over TKGs and realize excellent prediction performance (Tao et al., 2021). These methods can be divided into two categories: embedding-based methods and logical rule-based symbolic methods. Embedding-based methods such as RE-Net (Jin et al., 2020), CyGNet (Zhu et al., 2021), TIE (Wu et al., 2021) and RE-GCN (Li et al., 2021) can capture complex information in TKG, but the black-box property of embeddings make them lack interpretability and are not suitable for many practical applications. Some researchers propose to create logical rules for reasoning to enhance credibility and utility, such as Streamlearner (Omran et al., 2019) and Tlogic (Liu et al., 2022). They employ statistical-based measures to assess confidence of rules and make predictions based on learned rules. However, the learned rules are limited, which makes the model have scalability problems and is not suitable for large-scale datasets in reality. + +To alleviate the above problems, we propose an adaptive logical rule embedding model for inductive reasoning (ALRE-IR) on temporal knowledge graphs. It can effectively capture deep structure of TKG and mine potential logical rules. Logical rules are represented by a sequence of relations. Therefore, relations are the core features we focus on when mining rules, and entities are just tools for extracting relation paths. First, we extract relation paths from historical subgraphs and learn embeddings of relation paths that contain historical semantics. 
We then match these relation paths with current events to obtain rules and assess confidence of the rules based on interpretable causal logic. Finally, the quadruple can be scored according to confidence of the rules. We design training tasks from a coarse-grained quadruple perspective and a fine-grained rule perspective, respectively, and propose a one-class augmented matching loss to optimize our proposed adaptive logical rule embedding model. During the inference process, our model can adaptively extract and learn relation path features based on historical information, assess the confidence of corresponding rules, and predict missing entities. Furthermore, our trained model can be applied to new datasets with a common relation vocabulary for zero-shot reasoning. + +In summary, this paper makes the following four + +contributions: + +(1) An interpretable temporal knowledge graph reasoning method is developed, which can perform effective inductive reasoning. +(2) An adaptive logical rule embedding model is proposed, which can autonomously extract and assess rules based on historical features. +(3) A one-class augmented matching loss is designed to train the model from a coarse-grained quadruple perspective and a fine-grained rule perspective, respectively. +(4) Thorough experimental studies are conducted, and experimental results show that our proposed ALRE-IR model outperforms state-of-the-art baselines. + +# 2 Related Work + +# 2.1 Static Knowledge Graph (KG) Reasoning + +Common static KG reasoning models mainly focus on knowledge representation learning, that is, learning low-dimensional vector representations of entities and relations. These models are mainly divided into three categories: translation based models, semantic matching based models, and neural network based models. 
Translation based models regard the relation as a translation vector from a subject entity to an object entity, such as TransE (Bordes et al., 2013), TransH (Wang et al., 2014), and TransR (Lin et al., 2015). Semantic matching based models (e.g. ComplEx (Trouillon et al., 2016), DistMult (Yang et al., 2015) and RotatE (Sun et al., 2019)), assume that the score of a triple can be factorized into several tensors, and use triangular norm to measure the rationality of facts. Neural network based models use deep neural networks to learn network embeddings. For example, ConvE (Dettmers et al., 2018) and ConvKB (Nguyen et al., 2018) use convolutional neural networks to learn interactions between entities and relation. In addition, some models utilize graph neural networks which have outstanding performance in graph representation learning to embed KG, such as R-GCN (Schlichtkrull et al., 2018), A2N (Bansal et al., 2019), and RGHAT (Zhang et al., 2020). + +# 2.2 Temporal Knowledge Graph (TKG) Reasoning + +TKG reasoning can have two settings: extrapolation reasoning and interpolation reasoning. For interpolation reasoning, researchers complete missing events in past timestamps by adding temporal + +![](images/aa8a29ee2cb8f9f887f0c8ddb3ad2c7d6f4e291e8d214d1d76540d31724a7e36.jpg) +(a) subgraph + +![](images/ea52c4cae2419361391f90a0addec5e628cd37055404731d9d7d801528a4bfa7.jpg) +(b)paths + +![](images/dc911011061f048e7e5e156d3d9a9069106a7355527d320298d4b7d639c25d9d.jpg) +(c) rules +Figure 1: Rule extraction based on relation paths. + +information to the static KG representation learning method. TTransE (Leblay and Chekol, 2018) is an extension of TransE, which embeds temporal information into score function. HyTE (Dasgupta et al., 2018) improves TransH by replacing the unit normal vector of the hyperplane projection with the normal vector related to timestamp. TA-DistMult (Garcia-Duran et al., 2018) uses recurrent neural networks to embed time into relation embeddings. 
+ +Unlike interpolation reasoning, extrapolation reasoning predicts new events in the future based on historical facts. Existing methods for extrapolating reasoning can be divided into embedding-based methods and logical rule-based methods. Embedding-based methods include RE-NET (Jin et al., 2020), CyGNet (Zhu et al., 2021), RE-GCN (Li et al., 2021) and xERTE (Han et al., 2021). They capture temporal information either by learning embeddings for each timestamp, or learning evolutionary embeddings of entities and relations over time. Although these methods can capture complex features, they rely on trained embeddings and cannot make inductive predictions for events containing new entities, relations, or timestamps. Recently, some researchers have proposed logical rule-based interpretable methods, such as AnyBURL (Meilicke et al., 2020), StreamLearner (Omran et al., 2019) and TLogic (Liu et al., 2022). These methods mine logical rules from datasets through random walks, and design measures to assess confidence of candidate logical rules. Therefore, quality of the logical rules they learn depends largely on the measure chosen. Furthermore, these methods apply learned rules for reasoning and cannot adapt to new patterns of logical rules. + +# 3 Preliminaries + +Temporal Knowledge Graph (TKG). A TKG consists of dynamic events, an event is represented in the form of a quadruple $(s, r, o, t)$ consisting of a subject entity $s \in E$ , a relation $r \in \Upsilon$ , an object entity $o \in E$ and a timestamp $t \in \Gamma$ . $E$ and $\Upsilon$ denote entity set and relation set respectively, and $\Gamma$ represents the set of timestamps. + +Link Prediction. Given a missing temporal quadruple (event), link prediction aims to infer the missing part, such as predicting object entity given $(s,r,?,t)$ or predicting subject entity given $(?,r,o,t)$ or predicting relation given $(s,?,o,t)$ . 
For each quadruple, the training objective function is optimized to make the correct quadruple score higher than the incorrect one; it is generally defined as a score function $g(s,r,o,t) \in R$. + +Temporal logical rules. We predict future events by mining logical causal relationships between events. As shown in Figure 1, we take an event $(e_1, r_8, e_5, t)$ as an example, and mine the temporal logical rules contained in it according to the historical information in the previous $m$ timestamps. Figure 1(a) represents a subgraph composed of events that occurred in the previous $m$ timestamps. Based on this subgraph, we mine all rules that might lead to the current event. Figure 1(b) shows all paths from $e_1$ to $e_5$; extracting their relations yields the four possible logical rules in Figure 1(c). Each rule $R^t(p, r)$ consists of a path $p \in P_{(s,o)}^t$ and a relation $r \in \Upsilon$, where $p$ represents the historical reason and $r$ represents the result at the current timestamp. + +# 4 Method + +We propose an interpretable model for temporal knowledge graph reasoning called adaptive logical rule embedding model for inductive reasoning (ALRE-IR). + +![](images/4e1828648126d36599c77a91b912ef81b62face0a674f0d4329a11e9bc223c08.jpg) +Figure 2: Architecture of the proposed model. + +It uses relation paths to represent logical rules implicit in the knowledge graph, and captures complex semantic features in the knowledge graph by learning embeddings of relation paths. It can adaptively extract possible rules, learn rule embeddings, score rules based on interpretable causal logic, and finally make predictions based on the relative confidence of rules. Our inductive reasoning model is composed of three parts as follows: + +Encoding, which walks out all historical relation paths for each input quadruple and learns embeddings of all relation paths. + +Decoding, which scores quadruples according to all the temporal logical rules associated with them.
+ +Training, which proposes a one-class augmented matching loss to optimize the model, so that the model can adaptively learn reasonable logical rules. + +The overall architecture of the model is shown in Figure 2, details of the model will be elaborated as follows. + +# 4.1 Encoding + +Existing representation learning based temporal knowledge reasoning methods make predictions by learning evolutionary embeddings of entities. We propose to infer missing links according to causal logical rules, and encode historical information to find the reason that leads to the event. In this part, we mine relation logical rules contained in historical subgraphs, and learn embeddings of relation paths. + +# 4.1.1 Relation Paths Extraction + +We take entities as nodes and relations as edges, and construct a relation graph according to all events from timestamp $t - m$ to $t - 1$ . For events $(s,r,o,t)$ , we take the $k$ -hop neighbors around node $s$ and node $o$ respectively to get two subgraphs, and take the intersection of the two subgraphs. Then we remove independent nodes and nodes whose distance from node $s$ or node $o$ is greater than $k$ . By doing this, we can obtain a subgraph containing all paths between node $s$ and node $o$ whose length does not exceed $k + 1$ . After that, we extract all paths between node $s$ and node $o$ on the subgraph, and remove entities to get relation paths. + +# 4.1.2 Relation Paths Embedding + +We learn relation path sequence embeddings to capture logical semantics implied in the relation paths, which reflect spatial logical correlation of the two entities. + +In our approach, we exploit Gated Recurrent Unit (GRU), a popular variant of RNNs, to capture features of relation paths. Gated recurrent neural networks (Gated RNNs) have been successfully applied in processing data with sequence characteristics, i.e., data that conform to temporal, logical, or other orderings. 
RNNs can capture relationships in sequential data and mine the sequential and semantic information it contains. RNNs can solve sequence problems because they remember information from each moment: the hidden layer at each moment is determined not only by the input layer at that moment, but also by the output of the hidden layer at the previous moment. A simple recurrent unit can be represented as: + +$$ +\mathbf {h} _ {t} = f (\mathbf {W} \cdot \mathbf {x} _ {t} + \mathbf {U} \cdot \mathbf {h} _ {t - 1} + \mathbf {b}) \tag {1} +$$ + +where $\mathbf{x}_t$ is the input vector at timestamp $t$ , $\mathbf{h}_t$ is the hidden state at timestamp $t$ , $\mathbf{W}$ and $\mathbf{U}$ are two trainable weight parameters, and $f(\cdot)$ is an activation function. + +The relation paths we extract are of variable length, and GRU can easily process such sequences and produce an embedding for each path. GRU controls the flow of information through two learnable gates, called the update gate and the reset gate. The update gate controls which historical memory information is retained, and the reset gate controls which information is forgotten. The specific formulas of the GRU model are as follows: + +$$ +\mathbf {z} _ {t} = \sigma \left(\mathbf {W} _ {z} \cdot \mathbf {x} _ {t} + \mathbf {U} _ {z} \cdot \mathbf {h} _ {t - 1} + \mathbf {b} _ {z}\right) \tag {2} +$$ + +$$ +\mathbf {r} _ {t} = \sigma \left(\mathbf {W} _ {r} \cdot \mathbf {x} _ {t} + \mathbf {U} _ {r} \cdot \mathbf {h} _ {t - 1} + \mathbf {b} _ {r}\right) \tag {3} +$$ + +$$ +\tilde {\mathbf {h}} _ {t} = \tanh \left(\mathbf {W} \cdot \mathbf {x} _ {t} + \mathbf {U} \cdot \left(\mathbf {r} _ {t} \odot \mathbf {h} _ {t - 1}\right) + \mathbf {b}\right) \tag {4} +$$ + +$$ +\mathbf {h} _ {t} = \mathbf {z} _ {t} \odot \tilde {\mathbf {h}} _ {t} + (1 - \mathbf {z} _ {t}) \odot \mathbf {h} _ {t - 1} \tag {5} +$$ + +where $\mathbf{z}_t$ is the update gate and $\mathbf{r}_t$ is the reset gate.
$\mathbf{h}_{(t - 1)}$ represents the hidden state at timestamp $t - 1$ , which acts as the neural network memory, containing information of the previous input. $\sigma$ is the sigmoid function. + +# 4.2 Decoding + +# 4.2.1 Confidence Estimation of rules + +Relation paths extracted based on events $(s, r, o, t)$ represent possible "reasons" of events. According to the "result" $r$ , we find reasonable ones from these reasons, i.e. find possible matching rules. + +For an event $(s,r,o,t)$ , we get all relation paths between the subject entity $s$ and object entity $o$ , and the corresponding embeddings $\mathbf{P}_{(s,o)}^t$ . Taking the path $p_i \in P_{(s,o)}^t$ as "reason" and relation $r$ as "result", a rule $R^t(p,r)$ is obtained. Then, we estimate the rule confidence by capturing the interaction between path $p_i$ and relation $r$ . We define two confidence estimation functions as: + +# Similarity matching: + +$$ +f \left(p _ {i}, r\right) = \cos \left(\mathbf {p} _ {i}, \mathbf {r}\right) \tag {6} +$$ + +where $\cos$ represents cosine similarity. This function measures interaction score of path and relation using cosine similarity. + +# Concatenation combination: + +$$ +f \left(p _ {i}, r\right) = \sigma \left(\mathbf {W} \left(\mathbf {p} _ {i} \| \mathbf {r}\right)\right) \tag {7} +$$ + +where $\sigma$ is the sigmoid function, $\parallel$ represents concatenation operation. This function concatenates path and relation embeddings, capturing their interaction score with a linear projection, and $\mathbf{W}$ represents a projection matrix. + +# 4.2.2 Score function + +We predict the missing quadruple $(s,r,?,t)$ based on the learned path embeddings. For a candidate target entity $o$ , the corresponding quadruple is $(s,r,o,t)$ . 
The score function is defined as: + +$$ +g (s, r, o, t) = \max _ {\mathbf {p} _ {i} \in \mathbf {P} _ {(s, o)} ^ {t}} f (\mathbf {p} _ {i}, r) \tag {8} +$$ + +Taking entity $s$ as the starting node and entity $o$ as the ending node, we extract relation paths that represent all possible reasons for the current event. Combining these paths with relation $r$ generates a set of rules. The rule with the highest confidence indicates that the relation path in this rule is the most reasonable "reason" for the current quadruple, and we use the confidence of this rule as the score of the quadruple. + +# 4.3 Training + +In this subsection we introduce an objective function for training. For relation $r$ , we train the model to find matching paths in historical events (correct rules). However, the biggest difficulty is that we do not know which path-relation pairs match, i.e., there is no labeled correct rule for training. To this end, we design training tasks from a coarse-grained quadruple perspective and a fine-grained rule perspective, respectively, and propose a one-class augmented matching loss. + +# 4.3.1 Training from quadruple perspective + +From the quadruple perspective, similar to embedding-based methods, we design a main loss function according to the quadruple score, in order to let the correct quadruple score higher and the wrong quadruple score lower. The loss function is a soft-margin loss: + +$$ +L _ {1} = \sum_ {(s, r, o, t) \in Q \bigcup Q ^ {\prime}} \log (1 + \exp (l \cdot g (s, r, o, t))) \tag {9} +$$ + +$$ +l = \left\{ \begin{array}{c c} 1, & (s, r, o, t) \in Q \\ - 1, & (s, r, o, t) \in Q ^ {\prime} \end{array} \right. \tag {10} +$$ + +where $Q$ is the set of valid quadruples, and $Q^{\prime}$ denotes the set of invalid quadruples, $Q^{\prime} = \{(s,r,o^{\prime},t)|o^{\prime}\in E - o\}$.
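A minimal sketch of the similarity-matching estimator of Eq. (6) combined with the max-scoring of Eq. (8), using toy two-dimensional embeddings (all names and values here are illustrative, not the model's learned representations):

```python
from math import sqrt

def cos_sim(u, v):
    """Cosine similarity between two embedding vectors (Eq. 6)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def quad_score(path_embs, rel_emb):
    """Eq. (8): the quadruple score is the confidence of the
    best-matching rule, i.e. the max over all path-relation pairs."""
    return max(cos_sim(p, rel_emb) for p in path_embs)

# Toy embeddings: the first relation path aligns with r, the second is orthogonal
paths = [[1.0, 0.0], [0.0, 1.0]]
r = [1.0, 0.0]
print(quad_score(paths, r))  # the aligned path determines the score
```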
+ +![](images/83e6623fd6b2911f2143d76c67fc421dc3069fb3dd7d33f22579f92a24eaa62e.jpg) + +![](images/d8e05813a7d30d8366a96df65b5316c9e4a748f88af38559277d394eed745d4d.jpg) +Figure 3: Training from quadruple perspective. + +From the decoder, we know that the score of a quadruple is determined by the highest-confidence rule it contains. + +We find that all the rules drawn from a wrong quadruple must be wrong, but we cannot determine which rules drawn from a correct quadruple are correct. As shown in Figure 3(a), the training task is to make the correct quadruple score higher, that is, the soft positive rule with the highest score is regarded as a positive example to obtain a higher confidence. Similarly, for the wrong quadruple, the hard negative rule with the highest confidence is regarded as a negative example to obtain a lower confidence. From the view of matching the reason path and the result relation $r$ , this task is to pull the path in the positive example close to the relation $r$ , and to push the path in the negative example away from the relation $r$ , as shown in Figure 3(b). In short, this training task makes the soft positive rules have higher confidence, makes the easily misjudged hard negative rules have lower confidence, and ignores other negative rules and uncertain rules. + +# 4.3.2 Training from rule perspective + +From the fine-grained rule perspective, we add another auxiliary training task to handle rules ignored in the main task. Inspired by the one-class problem, we train the model only with negative samples. By negative sampling, we can obtain a sufficient number of negative quadruples, each of which contains multiple rules that can be determined to be negative. As shown in Figure 4, the relative confidence of possible positive rules is increased by decreasing the confidence of negative rules, thereby improving the prediction accuracy of the model.
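The quadruple-level signal of Eqs. (9)-(10) can be sketched as follows. This is a toy illustration of a single summand of the loss, reproducing the sign convention exactly as the paper defines it ($l = 1$ for valid, $l = -1$ for invalid quadruples):

```python
from math import exp, log

def soft_margin_term(g_score, l):
    """One summand of Eq. (9): log(1 + exp(l * g)), with l from
    Eq. (10): l = 1 for valid quadruples, l = -1 for invalid ones."""
    return log(1 + exp(l * g_score))

# Toy scores: the term shrinks as l * g becomes more negative
print(soft_margin_term(0.9, -1))
print(soft_margin_term(0.0, 1))  # equals log 2 at a zero score
```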
![](images/e82602340c92956427691ca95c482143e09947ec28e1557f52de22addd7048aa.jpg)
Figure 4: Training from rule perspective.

If the similarity matching function is applied to estimate the confidence of a rule, the loss function is defined as the cosine loss:

$$
L_{2} = \sum_{(s, r, o, t) \in Q \bigcup Q^{\prime}} \operatorname{cosloss}(\mathbf{p}, \mathbf{r}) \tag{11}
$$

$$
\operatorname{cosloss}(\mathbf{p}, \mathbf{r}) = \left\{ \begin{array}{ll} 1 - \cos(\mathbf{p}, \mathbf{r}), & y = 1 \\ \max(0, \cos(\mathbf{p}, \mathbf{r})), & y = -1 \end{array} \right. \tag{12}
$$

If the concatenation combination function is applied, the loss function is the soft-margin loss:

$$
L_{2} = \sum_{(s, r, o, t) \in Q \bigcup Q^{\prime}} \log (1 + \exp (l \cdot g(s, r, o, t))) \tag{13}
$$

Then the overall one-class augmented matching loss is defined as:

$$
L = \alpha L_{1} + (1 - \alpha) L_{2} \tag{14}
$$

where $\alpha \in [0,1]$.

# 4.4 Inference

Our method can directly use the trained model to extract historical features and predict the missing entity without a complex rule application process. First, we select candidate entities (all entities reachable within $k$ hops from the source entity $s$) for the query $(s,r,?,t)$, and generate candidate quadruples based on the candidate entities. Then, we apply the trained encoder to extract relation paths for the query, form rules and assess their confidence. Finally, we score the candidate quadruples according to the confidence of their rules. The candidate entity corresponding to the quadruple with the highest score is the predicted target entity.

# 5 Experiment

# 5.1 Datasets

We conduct experiments on the Integrated Crisis Early Warning System $^2$ (ICEWS) dataset. ICEWS is commonly used for temporal knowledge graph link
| Data | Entities | Relations | Training | Validation | Test | Time Granules |
| --- | --- | --- | --- | --- | --- | --- |
| ICEWS14 | 7,128 | 230 | 63,685 | 13,823 | 13,222 | 365 |
| ICEWS18 | 23,033 | 256 | 539,286 | 67,538 | 63,110 | 304 |
| ICEWS0515 | 10,488 | 251 | 272,115 | 17,535 | 20,466 | 4,017 |

Table 1: Statistics of the datasets.
| Method | ICEWS14 | | | | ICEWS18 | | | | ICEWS0515 | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | MRR | Hits@1 | Hits@3 | Hits@10 | MRR | Hits@1 | Hits@3 | Hits@10 | MRR | Hits@1 | Hits@3 | Hits@10 |
| DistMult | 0.2767 | 0.1816 | 0.3115 | 0.4696 | 0.1017 | 0.0452 | 0.1033 | 0.2125 | 0.2873 | 0.1933 | 0.3219 | 0.4754 |
| ComplEx | 0.3084 | 0.2151 | 0.3448 | 0.4958 | 0.2101 | 0.1187 | 0.2347 | 0.3987 | 0.3169 | 0.2144 | 0.3574 | 0.5204 |
| AnyBURL | 0.2967 | 0.2126 | 0.3333 | 0.4673 | 0.2277 | 0.1510 | 0.2544 | 0.3891 | 0.3205 | 0.2372 | 0.3545 | 0.5046 |
| TTransE | 0.1343 | 0.0311 | 0.1732 | 0.3455 | 0.0831 | 0.0192 | 0.0856 | 0.2189 | 0.1571 | 0.0500 | 0.1972 | 0.3802 |
| TA-DistMult | 0.2647 | 0.1709 | 0.3022 | 0.4541 | 0.1675 | 0.0861 | 0.1841 | 0.3359 | 0.2431 | 0.1458 | 0.2792 | 0.4421 |
| DE-SimplE | 0.3267 | 0.2443 | 0.3569 | 0.4911 | 0.1930 | 0.1153 | 0.2186 | 0.3480 | 0.3502 | 0.2591 | 0.3899 | 0.5275 |
| TNTComplEx | 0.3212 | 0.2335 | 0.3603 | 0.4913 | 0.2123 | 0.1328 | 0.2402 | 0.3691 | 0.2754 | 0.1952 | 0.3080 | 0.4286 |
| CyGNet | 0.3273 | 0.2369 | 0.3631 | 0.5067 | 0.2493 | 0.1590 | 0.2828 | 0.4261 | 0.3497 | 0.2567 | 0.3909 | 0.5294 |
| RE-NET | 0.3828 | 0.2868 | 0.4134 | 0.5452 | 0.2881 | 0.1905 | 0.3244 | 0.4751 | 0.4297 | 0.3126 | 0.4685 | 0.6347 |
| xERTE | 0.4079 | 0.3270 | 0.4567 | 0.5730 | 0.2931 | 0.2103 | 0.3351 | 0.4648 | 0.4662 | 0.3784 | 0.5231 | 0.6392 |
| TLogic | 0.4304 | 0.3356 | 0.4827 | 0.6123 | 0.2982 | 0.2054 | 0.3395 | 0.4853 | 0.4697 | 0.3621 | 0.5313 | 0.6743 |
| RE-GCN | 0.4435 | 0.3351 | 0.5081 | 0.6316 | 0.3484 | 0.2309 | 0.3983 | 0.5816 | 0.4923 | 0.3824 | 0.5571 | 0.7054 |
| ALRE-IR | 0.5401 | 0.4279 | 0.6116 | 0.7179 | 0.3841 | 0.2566 | 0.4372 | 0.6100 | 0.6018 | 0.4897 | 0.6777 | 0.7750 |
| ALRE-IR w/i CE | 0.6384 | 0.5380 | 0.7091 | 0.7907 | 0.4537 | 0.3778 | 0.4810 | 0.6661 | 0.6479 | 0.5588 | 0.7048 | 0.7803 |
Table 2: Performance comparison for entity prediction.

prediction, which contains international event information. We select three subsets of the ICEWS dataset, namely ICEWS0515, which contains data from 2005 to 2015, ICEWS14, which contains data from 2014, and ICEWS18, which contains data from 2018. We divide each dataset into training, validation and test sets; for a fair comparison, we use the data splits provided by Liu et al. (2022). Table 1 provides statistics of all datasets used.

# 5.2 Baselines

To demonstrate the effectiveness of our proposed ALRE-IR model, we compare experimental results with a wide selection of static models and temporal models.

Static models. We select several static knowledge graph representation learning models that ignore time information, including DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2016), and AnyBURL (Meilicke et al., 2020).

Temporal models. We also compare against temporal reasoning models for knowledge graphs, including TTransE (Leblay and Chekol, 2018), DE-SimplE (Goel et al., 2020), TNTComplEx (Lacroix et al., 2020), TA-DistMult (Garcia-Duran et al., 2018), RE-NET (Jin et al., 2020), CyGNet (Zhu et al., 2021), xERTE (Han et al., 2021), TLogic (Liu et al., 2022) and RE-GCN (Li et al., 2021). For RE-GCN (Li et al., 2021), we reproduce the experiments; for the other baselines, we list the results reported in TLogic (Liu et al., 2022).

Due to the time urgency of news events, related events may appear on the same day. Datasets that coarsely label temporal information in units of days hinder the model's ability to extract historical information. Therefore, we also try to use partial events in the current timestamp to provide hints during inference, denoted ALRE-IR $w/i$ CE. We sort events within the same timestamp according to the order in which they appear in the dataset, mask the later events, and use the earlier events as known historical events to assist in reasoning.
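The ALRE-IR $w/i$ CE setting described above can be sketched as a simple masking step: events sharing a timestamp keep their dataset order, and only those preceding the query event are visible as history (the event tuples below are hypothetical):

```python
def split_current_timestamp(events_at_t, query_index):
    """Events within one timestamp, in dataset order: events before the query
    event are usable as known history; the query event and all later ones
    are masked during inference."""
    history = events_at_t[:query_index]
    masked = events_at_t[query_index:]
    return history, masked

# Three hypothetical (subject, relation, object) events on the same day:
day = [("s1", "r1", "o1"), ("s2", "r2", "o2"), ("s3", "r3", "o3")]
history, masked = split_current_timestamp(day, 2)
```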
# 5.3 Results

Experimental results are shown in Table 2. All models perform best on ICEWS0515, followed by ICEWS14, and worst on ICEWS18. As can be seen from Table 1, the number of entities and events in the ICEWS18 dataset is large, so the TKG it forms is complex and dense, which introduces considerable noise into inference. In contrast, the TKG composed of ICEWS0515 is easier to handle.

Among the comparison models, the three static inference models DistMult, ComplEx, and AnyBURL do not consider temporal information and thus perform worst. TTransE, DE-SimplE, TNTComplEx and TA-DistMult are interpolation inference models, which cannot handle events at future timestamps and perform poorly. RE-NET and CyGNet fail to make predictions for entities that do not exist in the training set. xERTE achieves better performance than RE-NET and CyGNet, since it extracts a historical subgraph according to the query and performs attention propagation to reason on the subgraph.

| | Test: ICEWS14 | | | Test: ICEWS0515 | | |
| --- | --- | --- | --- | --- | --- | --- |
| Model (Train) | ALRE-IR (ICEWS14) | ALRE-IR (ICEWS0515) | RE-GCN (ICEWS14) | ALRE-IR (ICEWS0515) | ALRE-IR (ICEWS14) | RE-GCN (ICEWS0515) |
| MRR | 0.5401 | 0.5056 | 0.4435 | 0.6018 | 0.5867 | 0.4923 |
| Hits@1 | 0.4279 | 0.3894 | 0.3351 | 0.4897 | 0.4650 | 0.3824 |
| Hits@3 | 0.6116 | 0.5757 | 0.5081 | 0.6777 | 0.6659 | 0.5571 |
| Hits@10 | 0.7179 | 0.6917 | 0.6316 | 0.7750 | 0.7920 | 0.7054 |

Table 3: Zero-shot reasoning where rules learned on the train dataset are transferred and applied to the test dataset.

The two best performing baselines are the logical rule-based TLogic and the embedding-based RE-GCN. Our proposed ALRE-IR outperforms both models on all datasets. The measure used in TLogic to assess the confidence of logical rules is designed based on statistical methods. Instead, we use the learned path embeddings to evaluate rule confidence according to causal logic: the distance between embedding vectors reflects the similarity between paths and relations well. RE-GCN leverages graph convolutional networks to learn evolutionary representations of entities and relations, achieving better performance than TLogic. Both TLogic and our proposed ALRE-IR can transfer the trained model for inductive reasoning to datasets with a common relation vocabulary, but RE-GCN cannot.

Logical rule-based methods can effectively predict events covered by rules mined from the training set, but are slightly less effective for unseen rules. As shown in Table 2, Hits@1 of RE-GCN on ICEWS14 is slightly lower than that of TLogic, but its Hits@3 and Hits@10 are higher. The embedding-based RE-GCN learns entity evolutionary representations and predicts events based on distances between vector representations. The learned vector representations reflect the latent relationships between entities well, so that correct quadruples obtain higher scores. However, they ignore the important logical relationships contained in the knowledge graph, making accurate prediction difficult. Our proposed model combines the advantages of the two methods: it learns causal path embeddings to mine the underlying logic, improves the model's accurate prediction ability, and enhances robustness to unseen rules, thereby achieving better performance.
Furthermore, the outstanding performance of ALRE-IR $w/i$ CE shows that there is a strong correlation between events occurring within the same timestamp, which can assist the model in making real-time predictions.

![](images/476125879180cd1c29c086207501b9445e3eafb23604a0cdaa97e5cffff271fc.jpg)
Figure 5: Result on ICEWS14 dataset under different scales of training samples.

# 5.4 Zero-shot reasoning

Our proposed ALRE-IR model can be transferred to any new dataset that shares a common relation vocabulary with the training dataset for zero-shot reasoning. To evaluate the zero-shot reasoning performance of ALRE-IR, we conduct experiments on ICEWS0515 and ICEWS14; the results are shown in Table 3. The ALRE-IR model trained on ICEWS0515 is applied to ICEWS14 for reasoning. Its prediction performance is slightly worse than that of the ALRE-IR model trained on ICEWS14, but still better than the best baseline RE-GCN. Similar performance is achieved when the model is trained on ICEWS14 and tested on ICEWS0515.

# 5.5 Proportions of the training data

We evaluate the performance of the proposed ALRE-IR model under different scales of training samples; the results are shown in Figure 5. We divide the training set by timestamp and evaluate the model's performance when trained with events within the previous 50, 100, 150, 200, and 261 timestamps (full training set), respectively. When trained with events from only the previous 50 timestamps, the model underfits and performs poorly. But when we train the model with events from the previous 100 timestamps, it achieves performance similar to the model trained with the full training set. This shows that our model can achieve good performance with only a small number of training samples.

# 5.6 Error analysis

To analyze the errors of our proposed model in TKG reasoning, we randomly sample 100 inaccurately predicted test quadruples and summarize three types of errors.
(1) Same relation paths: The model scores quadruples based on the similarity between the historical relation path and the current relation. When the same reasonable historical relation path is mined for different quadruples, they obtain the same score.
(2) Time insensitivity: Due to insufficient use of temporal information, the time of the event occurrence is incorrectly predicted; that is, future events are predicted at the current moment.
(3) Immediate response events: Urgent related events may occur on the same day. The time interval of events in the dataset is one day, which prevents us from capturing key historical events that occurred on the same day.

Due to space limitations, we report more results, including implementation details, a detailed analysis and a case study, in the Appendices.

# 6 Conclusion

We propose an interpretable model for temporal knowledge graph reasoning called ALRE-IR. It can autonomously extract and assess rules based on historical features, and make predictions with these rules. We design training tasks from a coarse-grained quadruple perspective and a fine-grained rule perspective, respectively, and propose a one-class augmented matching loss for optimization. ALRE-IR can be transferred to perform zero-shot reasoning on any new dataset with a common relation vocabulary. Experimental results demonstrate that our proposed ALRE-IR performs better than the state-of-the-art baselines.

# Limitations

Although our proposed ALRE-IR model has shown better performance than state-of-the-art baselines, there are some limitations. We do not fully consider temporal information when mining logical rules. We only focus on the logical causality of historical paths and current events, while ignoring the specific time at which the "result" event occurred.
Regarding this issue, we consider adding temporal information to the path encoding, taking the time difference between a relation edge in the path and the current relation edge as a temporal feature, and encoding both the semantic feature and the temporal feature to improve prediction performance.

# Acknowledgement

This work was supported in part by the National Natural Science Foundation of China under Grants 61872296, 61772429, and U20B2065, and in part by the MOE (Ministry of Education in China) Project of Humanities and Social Sciences under Grant 18YJC870001.

# References

Trapit Bansal, Da-Cheng Juan, Sujith Ravi, and Andrew McCallum. 2019. A2N: Attending to neighbors for knowledge graph inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4387-4392.
Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of Computational Science, 2(1):1-8.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. Advances in Neural Information Processing Systems, 26.
Elizabeth Boschee, Jennifer Lautenschlager, Sean O'Brien, Steve Shellman, James Starz, and Michael Ward. 2015. ICEWS coded event data. Harvard Dataverse, 12.
Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Talukdar. 2018. HyTE: Hyperplane-based temporally aware knowledge graph embedding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2001-2011.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion.
In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 601-610.
Alberto Garcia-Duran, Sebastijan Dumančić, and Mathias Niepert. 2018. Learning sequence encoders for temporal knowledge graph completion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4816-4821.
Rishab Goel, Seyed Mehran Kazemi, Marcus Brubaker, and Pascal Poupart. 2020. Diachronic embedding for temporal knowledge graph completion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 3988-3995.
Simon Gottschalk and Elena Demidova. 2018. EventKG: a multilingual event-centric temporal knowledge graph. In European Semantic Web Conference, pages 272-287. Springer.
Simon Gottschalk and Elena Demidova. 2019. EventKG - the hub of event knowledge on the web - and biographical timeline generation. Semantic Web, 10(6):1039-1070.
Zhen Han, Peng Chen, Yunpu Ma, and Volker Tresp. 2021. Explainable subgraph reasoning for forecasting on temporal knowledge graphs. In 9th International Conference on Learning Representations, ICLR.
Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d'Amato, Gerard de Melo, Claudio Gutierrez, Sabrina Kirrane, Jose Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, et al. 2021. Knowledge graphs. Synthesis Lectures on Data, Semantics, and Knowledge, 12(2):1-257.
Wen Hua, Kai Zheng, and Xiaofang Zhou. 2015. Microblog entity linking with social temporal context. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, pages 1761-1775.
Woojeong Jin, Meng Qu, Xisen Jin, and Xiang Ren. 2020. Recurrent event network: Autoregressive structure inference over temporal knowledge graphs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6669-6683.
Gizem Korkmaz, Jose Cadena, Chris J Kuhlman, Achla Marathe, Anil Vullikanti, and Naren Ramakrishnan. 2015.
Combining heterogeneous data sources for civil unrest forecasting. In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015, pages 258-265. +Timothee Lacroix, Guillaume Obozinski, and Nicolas Usunier. 2020. Tensor decompositions for temporal knowledge base completion. In 8th International Conference on Learning Representations. +Julien Leblay and Melisachew Wudage Chekol. 2018. Deriving validity time in knowledge graph. In *Companion Proceedings of the The Web Conference* 2018, pages 1771-1776. +Zixuan Li, Xiaolong Jin, Wei Li, Saiping Guan, Jiafeng Guo, Huawei Shen, Yuzhuo Wang, and Xueqi Cheng. 2021. Temporal knowledge graph reasoning based on evolutionary representation learning. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 408-417. + +Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 2181-2187. +Yushan Liu, Yunpu Ma, Marcel Hildebrandt, Mitchell Joblin, and Volker Tresp. 2022. Tlogic: Temporal logical rules for explainable link forecasting on temporal knowledge graphs. Proceedings of the AAAI Conference on Artificial Intelligence. +Kangqi Luo, Fengli Lin, Xusheng Luo, and Kenny Zhu. 2018. Knowledge base question answering via encoding of complex query graphs. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2185-2194. +Christian Meilicke, Melisachew Wudage Chekol, Manuel Fink, and Heiner Stuckenschmidt. 2020. Reinforced anytime bottom up rule learning for knowledge graph completion. arXiv preprint arXiv:2004.04412. +Pablo N Mendes, Max Jakob, Andres Garcia-Silva, and Christian Bizer. 2011. Dbpedia spotlight: shedding light on the web of documents. 
In Proceedings of the 7th international conference on semantic systems, pages 1-8. +Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. 2013. Distant supervision for relation extraction with an incomplete knowledge base. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 777-782. +Sathappan Muthiah, Bert Huang, Jaime Arredondo, David Mares, Lise Getoor, Graham Katz, and Naren Ramakrishnan. 2015. Planned protest modeling in news and social media. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence. Citeseer. +Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2018. A novel embedding model for knowledge base completion based on convolutional neural network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 327-333. +Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2015. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11-33. +Pouya Ghiasnezhad Omran, Kewen Wang, and Zhe Wang. 2019. Learning temporal rules from knowledge graph streams. In AAAI Spring Symposium: Combining Machine Learning with Knowledge Engineering. + +Lawrence Phillips, Chase Dowling, Kyle Shaffer, Nathan Hodas, and Svitlana Volkova. 2017. Using social media to predict the future: A systematic literature. arXiv preprint arXiv:1706.06134. +Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In European semantic web conference, pages 593-607. Springer. +Baoxu Shi and Tim Weninger. 2018. Open-world knowledge graph completion. In Proceedings of the AAAI conference on artificial intelligence, volume 32. +Alessio Signorini, Alberto Maria Segre, and Philip M Polgreen. 2011. 
The use of twitter to track levels of disease activity and public concern in the us during the influenza a h1n1 pandemic. *PloS one*, 6(5):e19467. +Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In 7th International Conference on Learning Representations. +Ye Tao, Ying Li, and Zhonghai Wu. 2021. Temporal link prediction via reinforcement learning. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 3470-3474. IEEE. +Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd workshop on continuous vector space models and their compositionality, pages 57-66. +Rakshit Trivedi, Hanjun Dai, Yichen Wang, and Le Song. 2017. Know-evolve: Deep temporal reasoning for dynamic knowledge graphs. In international conference on machine learning, pages 3462-3471. PMLR. +Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International conference on machine learning, pages 2071-2080. PMLR. +Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724-2743. +Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI conference on artificial intelligence, volume 28. +Jiapeng Wu, Yishi Xu, Yingxue Zhang, Chen Ma, Mark Coates, and Jackie Chi Kit Cheung. 2021. Tie: A framework for embedding-based incremental temporal knowledge graph completion. In Proceedings of the 44th International ACM SIGIR Conference on + +Research and Development in Information Retrieval, SIGIR '21, page 428-437. +Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. 
Embedding entities and relations for learning and inference in knowledge bases. In 3rd International Conference on Learning Representations. +Scott Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the Joint Conference of the 53rd Annual Meeting of the ACL and the 7th International Joint Conference on Natural Language Processing of the AFNLP. +Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 1753-1762. +Zhao Zhang, Fuzhen Zhuang, Hengshu Zhu, Zhiping Shi, Hui Xiong, and Qing He. 2020. Relational graph neural network with hierarchical attention for knowledge graph completion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9612-9619. +Cunchao Zhu, Muhao Chen, Changjun Fan, Guangquan Cheng, and Yan Zhang. 2021. Learning from history: Modeling temporal knowledge graphs with sequential copy-generation networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 4732-4740. + +# A Appendix + +# A.1 Implementations + +We randomly initialize relation embeddings with dimension of 200. The initial hyperparameter of $\alpha$ is set to 0.5 and increases by 0.1 every training epoch until it reaches 1. The maximum length $k$ of the rules is set as 3, and the optimal historical event intervals $m$ on the ICEWS0515, ICEWS14 and ICEWS18 datasets are set to 5, 3, and 3, respectively. We use Adam optimizer to optimize all parameters, and the initial learning rate is set as 0.001. We use early stopping to avoid overfitting. We train the model for 200 epochs and stop training if the validation loss does not decrease for 10 consecutive epochs. 
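The $\alpha$ schedule described above (start at 0.5, increase by 0.1 each epoch, capped at 1) can be written as a one-line helper; this is a sketch of the stated schedule, not the authors' training code:

```python
def alpha_schedule(epoch: int, start: float = 0.5, step: float = 0.1) -> float:
    """Weight of the quadruple-level loss L1 in L = alpha*L1 + (1-alpha)*L2.

    Starts at 0.5 and increases by 0.1 every training epoch until it
    reaches 1, gradually shifting training from the rule-level auxiliary
    loss L2 toward the main quadruple-level loss L1.
    """
    return min(1.0, start + step * epoch)
```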
We adopt a time-aware filtering strategy (Han et al., 2021) to filter out the quadruples valid at the current timestamp from among the candidate negative quadruples. When extracting rules, we treat the knowledge graph as an undirected graph. MRR, Hits@1, Hits@3 and Hits@10 are employed as the metrics.
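Given the (filtered) rank of the ground-truth entity for each test query, the reported metrics follow the standard definitions; this is a generic sketch that omits the time-aware filtering step itself:

```python
def mrr_and_hits(ranks, ks=(1, 3, 10)):
    """Compute MRR and Hits@k from 1-based ranks of the true entities.

    ranks: iterable of ranks of the ground-truth entity among the scored
           candidate quadruples (after filtering out other valid answers).
    """
    ranks = list(ranks)
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = {k: sum(r <= k for r in ranks) / len(ranks) for k in ks}
    return mrr, hits
```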
| Method | ICEWS14 | | | | ICEWS0515 | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | MRR | Hits@1 | Hits@3 | Hits@10 | MRR | Hits@1 | Hits@3 | Hits@10 |
| ALRE-IR w/i SM | 0.5401 | 0.4279 | 0.6116 | 0.7179 | 0.6018 | 0.4897 | 0.6777 | 0.7750 |
| ALRE-IR w/i CC | 0.3886 | 0.2927 | 0.4264 | 0.5477 | 0.4332 | 0.3292 | 0.4824 | 0.5920 |

Table 4: Results with different confidence estimation functions.
| Method | ICEWS14 | | | | ICEWS0515 | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | MRR | Hits@1 | Hits@3 | Hits@10 | MRR | Hits@1 | Hits@3 | Hits@10 |
| ALRE-IR w/i M | 0.5401 | 0.4279 | 0.6116 | 0.7179 | 0.6018 | 0.4897 | 0.6777 | 0.7750 |
| ALRE-IR w/i A | 0.4196 | 0.3405 | 0.4608 | 0.5388 | 0.4610 | 0.3739 | 0.5036 | 0.5969 |

Table 5: Results with different score functions.

# A.2 Detailed Analysis

The results in this section are obtained on the ICEWS14 and ICEWS0515 datasets, with similar results on the ICEWS18 dataset.

# A.2.1 Confidence estimation function

In this paper, two rule confidence estimation methods, similarity matching and concatenation combination, are proposed. To evaluate these two methods, we conduct experiments on the ICEWS14 and ICEWS0515 datasets, and the results are reported in Table 4. ALRE-IR $w/i$ SM and ALRE-IR $w/i$ CC represent models that employ similarity matching and concatenation combination, respectively, to estimate rule confidence. As can be seen from Table 4, ALRE-IR $w/i$ CC performs significantly worse than ALRE-IR $w/i$ SM on both datasets. This indicates that similarity-based measures better reflect the causal association between paths and relations, so we adopt similarity matching on all three datasets to estimate the confidence of rules.

# A.2.2 Score function

When scoring a quadruple, the score function introduced in this paper takes the highest confidence among all rules as the score of the quadruple, denoted ALRE-IR $w/i$ M. We also try another approach: averaging all path embeddings to obtain a global path embedding, and taking the confidence of the rule composed of the global path and the relation as the score of the quadruple, denoted ALRE-IR $w/i$ A. We conduct experiments on the ICEWS14 and ICEWS0515 datasets, and the results are shown in Table 5. It is clear that averaging all path embeddings works poorly, because not all paths contribute to the current event.

# A.3 Case study

Table 6 shows prediction results for two queries (Uhuru Muigai Kenyatta, Demand,?,t) and (South
ALRE-IR aims to find the target entity corresponding to the most reasonable rule. + +
| Query | Path (P) and rule (R) | Score | Target entity |
| --- | --- | --- | --- |
| (Uhuru Muigai Kenyatta, Demand, ?, t) | P: Citizen (Kenya) $\xrightarrow[t-1]{\text{Demand meeting}}$ Uhuru Muigai Kenyatta; R: Demand meeting $\Rightarrow$ Demand$^{-1}$ | 0.2699 | Citizen (Kenya) (√) |
| | P: Uhuru Muigai Kenyatta $\xrightarrow[t-1]{\text{Threaten}}$ Citizen (Kenya); R: Threaten $\Rightarrow$ Demand | 0.0597 | |
| | P: Police (Kenya) $\xrightarrow[t-1]{\text{Demand}}$ Citizen (Kenya) $\xrightarrow[t-1]{\text{Demand meeting}}$ Uhuru Muigai Kenyatta; R: Demand \| Demand meeting $\Rightarrow$ Demand$^{-1}$ | 0.0356 | Police (Kenya) |
| | P: William Ruto $\xrightarrow[t-2]{\text{Make an appeal}}$ Citizen (Kenya) $\xrightarrow[t-1]{\text{Demand meeting}}$ Uhuru Muigai Kenyatta; R: Make an appeal \| Demand meeting $\Rightarrow$ Demand$^{-1}$ | -0.1009 | William Ruto |
| (South Korea, Sign formal agreement, ?, t) | P: South Korea $\xrightarrow[t-1]{\text{Express intent to cooperate}}$ China; R: Express intent to cooperate $\Rightarrow$ Sign formal agreement | 0.4960 | |
| | P: South Korea $\xrightarrow[t-2]{\text{Express intent to cooperate economically}}$ China; R: Express intent to cooperate economically $\Rightarrow$ Sign formal agreement | 0.2850 | China (√) |
| | P: South Korea $\xrightarrow[t-2]{\text{Engage in negotiation}}$ China; R: Engage in negotiation $\Rightarrow$ Sign formal agreement | 0.1165 | |
| | P: South Korea $\xrightarrow[t-2]{\text{Express intent to provide economic aid}}$ International Government Organizations; R: Express intent to provide economic aid $\Rightarrow$ Sign formal agreement | 0.3470 | International Government Organizations |
| | P: South Korea $\xrightarrow[t-2]{\text{Express intent to provide humanitarian aid}}$ Sierra Leone; R: Express intent to provide humanitarian aid $\Rightarrow$ Sign formal agreement | 0.2140 | Sierra Leone |
| | P: North Korea $\xrightarrow[t-1]{\text{Occupy territory}}$ South Korea; R: Occupy territory $\Rightarrow$ Sign formal agreement$^{-1}$ | 0.0065 | North Korea |
Table 6: Entity prediction visualization on ICEWS14. Since the dataset does not provide accurate time, we use $t$ to denote the time of the event.
# An Anchor-based Relative Position Embedding Method for Cross-Modal Tasks

Ya Wang $^{1,*}$ , Xingwu Sun $^{1,2,*}$ , Fengzong Lian $^{1}$ , Zhanhui Kang $^{1}$ , Chengzhong Xu $^{2}$

Machine Learning Platform Department, Tencent $^{1}$

State Key Lab of IOTSC, Department of Computer Science, University of Macau $^{2}$

connorywang@tencent.com, sammsun@tencent.com,
faxonlian@tencent.com, kegokang@tencent.com, czxu@um.edu.mo

# Abstract

Position Embedding (PE) is essential for a transformer to capture the sequence ordering of input tokens. Despite its general effectiveness verified in Natural Language Processing (NLP) and Computer Vision (CV), its application in cross-modal tasks remains unexplored and suffers from two challenges: 1) the input text tokens and image patches are not aligned; 2) the encoding space of each modality is different, making direct feature comparison infeasible. In this paper, we propose a unified position embedding method for these problems, called AnChor-basEd Relative Position Embedding (ACE-RPE), in which we first introduce an anchor locating mechanism to bridge the semantic gap and locate anchors from different modalities. Then we conduct the distance calculation of each text token and image patch by computing their shortest paths through the located anchors. Last, we embed the anchor-based distance to guide the computation of cross-attention. In this way, it provides cross-modal relative position embeddings for the cross-modal transformer. Benefiting from ACE-RPE, our method obtains new SOTA results on a wide range of benchmarks, such as Image-Text Retrieval on MS-COCO and Flickr30K, Visual Entailment on SNLI-VE, Visual Reasoning on NLVR2 and Weakly-supervised Visual Grounding on RefCOCO+.

# 1 Introduction

The Transformer (Vaswani et al., 2017) has shown excellent performance in Natural Language Processing (NLP), Computer Vision (CV) as well as cross-modal tasks, including natural language inference (Devlin et al., 2018), image classification (Wu et al., 2021), visual question answering (Wu et al., 2017) and visual entailment (Xie et al., 2019), etc. Nevertheless, the transformer lacks the capability to capture the ordering information of the input tokens because of the limitation of its self-attention mechanism.
Therefore, incorporating explicit position representations is crucial to improving the performance of transformer-based models (Devlin et al., 2018; Dosovitskiy et al., 2020).

Generally, there are two mainstream position encoding methods in transformer-based NLP and CV models, i.e., absolute position embedding (APE) and relative position embedding (RPE). APE methods (Vaswani et al., 2017; Devlin et al., 2018; Dosovitskiy et al., 2020) encode absolute positions of the input tokens with either trainable (Devlin et al., 2018) or fixed embeddings (Vaswani et al., 2017). These position embeddings are added to the token embeddings, which are then passed to the self-attention layer so that token relationships are calculated with positional information taken into account. This has been verified as effective in a variety of NLP (Wang et al., 2020; Devlin et al., 2018) and CV (Wu et al., 2021) tasks. On the other hand, RPE methods (Chu et al., 2021; Shaw et al., 2018) encode the pairwise distances of every two tokens. Commonly, they interact directly with the attention computation in different ways (Wu et al., 2021; Chu et al., 2021). Compared with APE, RPE methods are better at modeling the positional information of extremely long or variable-length sequences. As a result, on some span prediction tasks in NLP, RPE methods achieve larger performance gains than APE ones (Wang et al., 2020).

Despite the success of position embedding methods in unimodal tasks, their exploration in the field of cross-modal modeling is still limited. Recent works on cross-modal tasks (Cho et al., 2021; Li et al., 2021) can be classified into two frameworks: 1) one-stage methods (Fig. 1(a)), which extract the cross-modal representation with a unified cross-modal encoder; 2) two-stage methods (Fig. 1(b)), which have additional text and image encoders.
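Concretely, the two families differ in where the positional signal enters the computation. The following is a minimal single-head numpy sketch (random tensors, no learned parameters, not any specific model from the works cited above): APE is added to the token embeddings before attention, while RPE enters as a pairwise bias on the attention logits.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 6, 16                        # sequence length, model dim
tok = rng.normal(size=(N, D))       # token embeddings (random stand-ins)

# APE: one vector per absolute position, added to the input embeddings.
ape = rng.normal(size=(N, D))
x = tok + ape                       # position-aware input to self-attention

# RPE: one (learnable) scalar per relative offset i - j, indexed pairwise.
rel = np.arange(N)[:, None] - np.arange(N)[None, :]
bias_table = rng.normal(size=(2 * N - 1,))
rpe_bias = bias_table[rel + N - 1]  # (N, N) attention bias
assert rpe_bias[0, 1] == rpe_bias[1, 2]  # equal offsets share one bias entry

q = k = x
logits = q @ k.T / np.sqrt(D) + rpe_bias  # RPE biases the attention score itself
attn = np.exp(logits - logits.max(-1, keepdims=True))
attn /= attn.sum(-1, keepdims=True)
assert np.allclose(attn.sum(-1), 1.0)
```

The translation-invariance noted above is visible in the table lookup: every pair with the same offset $i - j$ shares one bias entry, regardless of sequence length.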
Both of them adopt position embeddings in a separate way, where the text and image position representations are embedded individually. In this way, the models can only learn position embeddings within each modality, ignoring positional information between two tokens from different modalities. However, it is challenging to devise a unified method for cross-modal position embedding. Firstly, the inputs from the two modalities are embedded into different spaces, making the input embeddings incomparable. Secondly, since text tokens and image patches are not aligned, the relative positions between two units from different modalities are meaningless.

In this paper, we advocate a new perspective for effective cross-modal position encoding (shown in Fig. 1(c)), called AnChor-basEd Relative Position Embedding (ACE-RPE). It first computes the alignment between text and image tokens to locate aligned pieces, which are called anchors in this paper. Subsequently, the token-to-token (t2t) and patch-to-patch (p2p) relative positions are calculated for unimodal ordering information. The relative position search between an arbitrary text token and an image patch is then treated as a shortest path problem with three steps: 1) routing from the given token to its nearby anchors; 2) routing from the anchors to their located image patches; and 3) routing from the located patches to the given image patch. As illustrated in Fig. 2, the relative position of "A" and the image patch of the man is derived from three terms: the t2t relative position between "A" and the anchor "man", the relative position from the anchor "man" to the image patch matching "man", and the relative position from the located image patch to the patch of the human (obviously, 0 in this case). Finally, we embed the anchor-based relative position into the self-attention calculation. Further, we conduct extensive experiments to verify the effectiveness of the proposed ACE-RPE compared to many strong baselines.
The results demonstrate that our method can boost the performance of cross-modal transformers by a large margin.

The main contributions of this work can be summarized as follows:

- We propose the ACE-RPE method to incorporate positional information into cross-modal transformers and bridge the gap between different modalities. To the best of our knowledge, it is the first work to model relative position in cross-modal tasks.
- We give an anchor-based RPE method to derive relative positions according to the located anchors between two modalities. Extensive experiments against strong baselines reveal the effectiveness of this method.
- Our method achieves new SOTA results on 5 cross-modal benchmarks, including Flickr30K (Plummer et al., 2015), MS-COCO (Lin et al., 2014), SNLI-VE (Xie et al., 2019), NLVR2 (Suhr et al., 2018) and RefCOCO+ (Yu et al., 2016). In addition, it also surpasses baseline methods significantly on VQA (Goyal et al., 2017).

# 2 Related Work

# 2.1 Position Embedding for NLP

Currently, the Transformer (Vaswani et al., 2017) plays a major role in the field of NLP. It shows superiority in many real-world tasks, such as natural language inference (Devlin et al., 2018) and question answering (Devlin et al., 2018; Rajpurkar et al., 2016). However, the self-attention of the transformer lacks the ability to capture the ordering information of input tokens in a sequence. Thus, additional explicit representations for token positions are crucial to the performance of the transformer.

Position embedding in NLP can be categorized into two classes: APE and RPE. APE encodes the absolute position of tokens in a sequence. Each position has its individual embedding, generated with a specific function, like the sinusoidal operator (Vaswani et al., 2017) or learnable encoding (Devlin et al., 2018). Usually, the generated APE is added to the input text token embeddings for an explicit view of token positions.
Therefore, the same token in different positions will have different embeddings. Various works on APE have been proposed to further boost the performance of transformer-based methods.

RPE (Dai et al., 2019; Devlin et al., 2018; Raffel et al., 2019) encodes the pairwise relative token position via interaction with the query, key or value in self-attention modules (Shaw et al., 2018). Compared to APE, RPE is translation-invariant and can encode input sequences of variable length. Therefore, it has been shown to surpass APE on some long-sequence tasks (Wang et al., 2020).

# 2.2 Position Embedding for CV

With the great success of the Vision Transformer (ViT) (Dosovitskiy et al., 2020) on large-scale datasets, transformer-based methods have also become

![](images/4adc4c3dff4cfdde036bb5f13635c4b66d66aebc47d723c919f54c242e85c44d.jpg)
(a) One-stage

![](images/4b9f0283d228df97f615d7178f619928a44e1af167f28552290a002c6021d1f1.jpg)
(b) Two-stage

![](images/8a0cfbc1d1e5fc5e2f6122b89db97af166205161c10409dbcd72b97cab5273f7.jpg)
(c) Ours

![](images/9247609f21110d0f060aae934e219d7a83ee2d9d0d2243ccb8154c6b879f73b1.jpg)
Figure 1: Conceptual comparison of three position embedding methods. The output blocks in green, orange and blue represent the [CLS] token, text and image embeddings. (a) The one-stage method (Tsai et al., 2019), which has a unified cross-modal encoder. Only APE is utilized in this method. (b) The two-stage method (Li et al., 2021), containing extra text and image encoders. Both APE and RPE are injected into the backbone, but they are embedded separately per modality. (c) Our ACE-RPE method. In addition to unimodal APE and RPE, ACE-RPE is proposed to provide the cross-modal encoder with relative position information from different modalities.

![](images/3a204ba0ee46aaaf78e061731daecbe471a764a467f23d673efe5d7c415f587b.jpg)
Figure 2: A case from MS-COCO (Lin et al., 2014) to illustrate ACE-RPE.
The proposed anchor-based relative position is calculated with the located anchors (words in red) and the t2t relative position. "M" is the masked relative position.

an important paradigm in the area of CV (Dosovitskiy et al., 2020; Wu et al., 2021). Following the transformer-based methods in NLP tasks, position embedding is also considered a key component for obtaining better performance on CV tasks. Though common RPE on images can outperform APE methods in some tasks (Dosovitskiy et al., 2020), some works (Dosovitskiy et al., 2020; Srinivas et al., 2021) demonstrate that the superiority of RPE is not solid. To handle this issue, some follow-up works (Chu et al., 2021; Wu et al., 2021; Zhang and Yang, 2021) present significant improvements on RPE methods, which surpass their APE counterparts by more robust margins.

In summary, position embedding has been proved to have a significant effect on the performance of transformer-based models in both NLP and CV. However, its exploration in cross-modal tasks is still vacant. One of the most important reasons is that it is challenging to find a meaningful "position" between different modalities. For example, there is no obvious way to define the position of the word "are" in a text relative to the corresponding patches in an image. To this end, we propose an anchor-based method, which bridges the gap between the text and image modalities and makes it possible to calculate position embeddings across modalities.

# 3 Methods

The overview of our backbone network is presented in Fig. 3, which contains a 6-layer vision transformer (Dosovitskiy et al., 2020) as the image encoder, a 6-layer linguistic transformer (Devlin et al., 2018) as the text encoder and a 6-layer cross-modal transformer. The AnChor-basEd Relative Position Embedding (ACE-RPE) is proposed to provide the cross-modal encoder with cross-modal positional information.
It involves two key procedures: 1) learning the locating of cross-modal anchors; 2) ACE-RPE calculation by incorporating anchor locating and the t2t/p2p relative positions. In this section, we first present the above procedures in detail (Sec. 3.1 and Sec. 3.2). Then, we present the overall pre-training objectives of our method.

![](images/b20dcc5e10e05a84ae417c6a5afdd7cc4559754adb01fbd7d711c14dd36171df.jpg)
Figure 3: The overall architecture of our ACE-RPE method. It contains a text encoder, an image encoder and an extra cross-modal encoder to extract cross-modal features. Firstly, it learns the cross-modal locating of anchors in an unsupervised manner. Then, the cross-modal position embedding is calculated by interacting with the input embeddings of text tokens and image patches (detailed in the right part), which serves as the RPE of the following cross-modal encoder. The model pre-training follows four objectives: Image-Text Matching (ITM), Masked Language Modeling (MLM), Masked Image Modeling (MIM) and Anchor Loss.

![](images/fb5317536d432c86e5223dbdd569112aaf62a7cdbf4b7f60ff3811ba2b15d1c2.jpg)

# 3.1 Cross-modal Locating of Anchors

Considering an image $x$ and its corresponding text $y$ , an "anchor" in this paper refers to a prominent token of $y$ that can be located to some patches of $x$ . An illustration of cross-modal anchors is depicted in Fig. 2. Naturally, the word "man" is associated with the image patch containing the human, and "blue" can be located to the blue patches. Words such as "man" and "blue" are then called anchors in this paper.

In this part, we propose an unsupervised method to identify the cross-modal anchors effectively. It uses a token-wise loss to search for anchors without any additional annotations.
Formally, the raw image $x$ is segmented into $M + 1$ image patches (Dosovitskiy et al., 2020), i.e., $x = \{c_x, x_1, x_2, \dots, x_m, \dots, x_M\}$ , where each of them is embedded as a normalized $D$ -dimensional vector and $c_x$ is an image [CLS] token. Similarly, the text $y$ is tokenized into $N + 1$ text tokens, $y = \{c_y, y_1, y_2, \dots, y_n, \dots, y_N\}$ , where $c_y$ is a text [CLS] token. The token-wise similarity between the image patch $x_m$ and text token $y_n$ is computed by a specific similarity function (cosine similarity in this paper) $f$ . We then introduce an anchor loss to maximize the similarity of the anchors and their matching image patches, without changing the similarity of unmatched pairs, e.g., "blue" and patches of the "horse" in Fig. 2. Accordingly, the proposed anchor loss is formulated based on contrastive learning and the log-sum-exp trick1:

$$
\begin{array}{l} \mathcal {L} _ {a c e} = \frac {1}{2} \mathbb {E} _ {(x, y)} \left[ H _ {i 2 t} (x, \mathcal {O} _ {y}) + H _ {t 2 i} (y, \mathcal {O} _ {x}) \right. \\ \left. - \frac {1}{\lambda} \log \sum_ {m, n} e ^ {\lambda f \left(x _ {m}, y _ {n}\right)} \right] \tag {1} \\ \end{array}
$$

where $\lambda$ is a scale parameter. $\mathcal{O}_y$ and $\mathcal{O}_x$ indicate the dynamic dictionaries (He et al., 2020), each containing one positive sample and $K - 1$ negative samples; that is, only the text $y$ in $\mathcal{O}_y$ matches the image $x$ . $K$ is 65,536 in this paper, following (Li et al., 2021). $H_{i2t}(x,\mathcal{O}_y)$ and $H_{t2i}(y,\mathcal{O}_x)$ denote the image-to-text and text-to-image contrastive losses based on K-pairs, respectively:

$$
\begin{array}{l} H _ {i 2 t} (x, \mathcal {O} _ {y}) = - \min \left\{0, f (x, y) - \delta \right. \\ \left. - \frac {1}{\lambda} \log \sum_ {z \in \mathcal {O} _ {y}, z \neq y} e ^ {\lambda f (x, z)} \right\} \tag {2} \\ \end{array}
$$

Here $\delta$ is the margin between positive and negative samples, which is empirically set to 0.05 in our experiments. $H_{t2i}(y,\mathcal{O}_x)$ is defined accordingly.

# 3.2 Calculation of ACE-RPE

The calculation of ACE-RPE involves three major components: 1) the locating of anchors with a multi-group relative position; 2) the computation of the anchor-based cross-modal relative position between text tokens and image patches; 3) cross-modal relative position embedding. Each step is elaborated as follows.

# 3.2.1 Locating of Anchors

The relative position between anchors and their related image patches is dynamically generated with a proposed multi-group cross-modal similarity,

$$
S _ {G} \left(x _ {m}, y _ {n}\right) = \left[ f \left(\hat {x} _ {m} ^ {1}, \hat {y} _ {n} ^ {1}\right), f \left(\hat {x} _ {m} ^ {2}, \hat {y} _ {n} ^ {2}\right), \dots , f \left(\hat {x} _ {m} ^ {G}, \hat {y} _ {n} ^ {G}\right) \right] \tag {3}
$$

where $G$ is the number of groups, $\hat{x}_m\in \mathbb{R}^{G\times \frac{D}{G}}$ and $\hat{y}_n\in \mathbb{R}^{G\times \frac{D}{G}}$ are the reshaped versions of $x_{m}$ and $y_{n}$ , and $\hat{x}_m^j\in \mathbb{R}^{\frac{D}{G}},\hat{y}_n^j\in \mathbb{R}^{\frac{D}{G}}$ are their $j$ -th groups. Note that our proposed multi-group cross-modal similarity is not a scalar but a vector of length $G$ .

As shown in Eqn. 3, the multi-group cross-modal similarity operates on all text tokens and image patches. We then introduce a post-locating for anchors with a soft shrinking operator,

$$
\widehat {S} _ {G} \left(x _ {m}, y _ {n}\right) = \left\{ \begin{array}{l l} S _ {G} \left(x _ {m}, y _ {n}\right), & S _ {G} \left(x _ {m}, y _ {n}\right) \geq \delta \\ \delta e ^ {\tau \left(S _ {G} \left(x _ {m}, y _ {n}\right) - \delta\right)}, & S _ {G} \left(x _ {m}, y _ {n}\right) < \delta \end{array} \right. \tag {4}
$$

where $\delta$ is a hyper-parameter.
$\tau$ is a large enough scalar, set to $10^{4}$ in this paper.

The set of anchors is then defined as

$$
\mathcal {A} _ {G} (x, y) = \left\{x _ {m} \mid \exists y _ {n}, s. t. \widehat {S} _ {G} \left(x _ {m}, y _ {n}\right) \geq \delta \right\} \tag {5}
$$

where $\geq$ is evaluated element-wise for each group of $\widehat{S}_G$ . Hence, $\mathcal{A}_G(x,y)$ is a collection of $G$ anchor sets, which may differ across groups. As indicated in Eqn. 8 and analyzed in Sec. A.2, multi-group anchor sets instead of a single one can enhance the flexibility of position embeddings.

Finally, the distance between anchors and their related image patches is

$$
D _ {G} (x _ {m}, y _ {n}) = \frac {1}{\widehat {S} _ {G} (x _ {m} , y _ {n})} \tag {6}
$$

# 3.2.2 Anchor-based Cross-modal Relative Position Calculation

Given an arbitrary text token and an image patch, we consider the calculation of their relative position as a shortest path problem, where the path is split into three steps: 1) route from the given text token to nearby anchors; 2) route from anchors to their located image patches; and 3) route from the located image patches to the given image patch. Formally, the anchor-based relative distance is

$$
\begin{array}{l} P _ {a c e} \left(x _ {m}, y _ {n}\right) = \min _ {i, j} \left\{D _ {p 2 p} \left(x _ {m}, x _ {i}\right) \oplus \right. \tag {7} \\ \left. D _ {G} \left(x _ {i}, y _ {j}\right) \oplus D _ {t 2 t} \left(y _ {j}, y _ {n}\right) \right\} \\ \end{array}
$$

where " $\oplus$ " is the broadcasting addition of scalars and vectors, and " $\min(\cdot)$ " is executed in an inner-group manner, i.e., the values are compared within each group. Therefore, the output $P_{ace}(x_m, y_n)$ remains a vector of length $G$ . Here $D_{p2p}$ and $D_{t2t}$ are the common image patch-to-patch and text token-to-token physical distances, respectively.
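The pipeline of Eqns. 3-7 (plus the projection of Eqn. 8 below) can be illustrated with a toy numpy sketch. All tensors are random, the patch grid is simplified to one dimension, and the neighborhood truncation ( $B_t$ , $B_p$ ) and the explicit anchor-set selection of Eqn. 5 are omitted: the soft-shrunk similarities already make non-anchor routes expensive, so the min avoids them. $\tau$ is also reduced from the paper's $10^{4}$ to keep the toy numerically tame.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, D, G = 4, 5, 16, 8        # image patches, text tokens, embed dim, groups
delta = 0.05                     # threshold/margin from the paper
tau = 10.0                       # paper uses 1e4; smaller here for a tame toy

x = rng.normal(size=(M, D))      # patch embeddings (random stand-ins)
y = rng.normal(size=(N, D))      # token embeddings (random stand-ins)

def grouped_cosine(a, b):
    """Eqn. 3: split the D dims into G groups, cosine similarity per group."""
    a, b = a.reshape(G, D // G), b.reshape(G, D // G)
    return (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-8)

S = np.array([[grouped_cosine(x[m], y[n]) for n in range(N)] for m in range(M)])  # (M, N, G)

# Eqn. 4: soft shrink -- similarities below delta are squashed toward zero,
# which makes the corresponding anchor-hop distances large.
S_hat = np.where(S >= delta, S, delta * np.exp(tau * (S - delta)))

D_anchor = 1.0 / S_hat           # Eqn. 6: anchor-hop distance, (M, N, G)

# Unimodal physical distances (1-D offsets; the paper uses a 2-D patch grid).
D_t2t = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :]).astype(float)  # (N, N)
D_p2p = np.abs(np.arange(M)[:, None] - np.arange(M)[None, :]).astype(float)  # (M, M)

# Eqn. 7: shortest path  patch m -> anchor patch i -> anchor token j -> token n,
# minimised independently within each of the G groups.
P = (D_p2p[:, :, None, None, None]      # axes: m, i
     + D_anchor[None, :, :, None, :]    # axes: i, j, group
     + D_t2t[None, None, :, :, None]    # axes: j, n
     ).min(axis=(1, 2))                  # -> (M, N, G)

# Eqn. 8: project the G-dim distance vector to a D-dim position embedding.
W = rng.normal(size=(G, D))
E_ace = P @ W                            # (M, N, D)
assert P.shape == (M, N, G) and np.isfinite(P).all()
assert E_ace.shape == (M, N, D)
```

This sketch is a sketch only: the real model restricts the min to the anchor set of Eqn. 5, uses learned (not random) features, and implements the computation with pointwise convolutions as described next.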
For efficiency, we only consider a neighborhood of $B_t$ tokens in $D_{t2t}$ and a square neighborhood of $B_p$ image patches in $D_{p2p}$ . It should be noted that the matrix over all text tokens and image patches $P_{ace}(x, y)$ can be implemented efficiently by pointwise convolution (Howard et al., 2017), reducing the computational complexity to $O(MNB_pB_tG)$ , which is negligible since $B_p$ , $B_t$ and $G$ are small.

# 3.2.3 Cross-modal Relative Position Embedding

Sec. 3.2.2 provides the multi-group relative position of each text token and image patch. The pairwise anchor-based relative position is then embedded with a learnable matrix $W \in \mathbb{R}^{G \times D}$ ,

$$
E _ {a c e} \left(x _ {m}, y _ {n}\right) = P _ {a c e} \left(x _ {m}, y _ {n}\right) W \tag {8}
$$

which is called ACE-RPE in this paper. The proposed ACE-RPE is a specific case of RPE, where the distance between images and texts is calculated with an anchor strategy and represented by a $G$ -dimensional vector. The distances are then projected to learnable position embeddings, and the same distance yields the same position embedding. Consequently, the t2t RPE in NLP, the p2p RPE in CV and the t2p/p2t RPE in cross-modal tasks are united in a single form, as formulated in Eqn. 7.

As detailed in Sec. A.3, we propose two different cross-attention modes interacting with ACE-RPE, i.e., the bias mode and the contextual mode. By default, we use the contextual mode in this paper.

# 3.3 Pre-training Objectives

The pre-training of our models involves optimizing four objectives jointly, i.e., the proposed anchor loss for anchor locating, Masked Language Modeling (MLM) for text embedding, Masked Image Modeling (MIM) for image embedding, and Image-Text Matching (ITM) for cross-modal matching, as shown in Fig. 3.

Anchor Loss is optimized during pre-training for better anchor locating. As noted in Eqn.
1, it enhances the similarity of anchors and their matching image patches via token-wise contrastive learning, while leaving unmatched pairs unchanged through the log-sum-exp trick.

Masked Language Modeling (MLM) predicts the masked words with both contextual text tokens and image patches. It aims to learn better text embeddings by injecting extra contextual information from image patches. In this part, we conduct the MLM with a masking probability of $15\%$ and take the output text embeddings of the cross-encoder to predict the masked tokens.

Masked Image Modeling (MIM) predicts raw pixel values of the randomly masked image patches with a lightweight one-layer head. Following (Xie et al., 2021), we implement this task by optimizing the $\ell_1$ loss between raw pixel values and the output of the prediction head.

Image-Text Matching (ITM) predicts whether an image-text pair is positive (matched) or negative (unmatched), and further captures the contextual correlation between vision and language. It is a binary classification task that takes the embedding of the [CLS] token as a joint representation of the image-text pair.

# 4 Experiments

In this section, we first provide numerical analyses of the proposed ACE-RPE method compared with widely used baselines on 5 cross-modal tasks, covering 6 benchmarks. Then, we make a detailed ablation study to analyze the contribution of each component of the proposed ACE-RPE method.

# 4.1 Pre-training Setup

Pre-training Datasets Following ALBEF (Li et al., 2021), the pre-training datasets are constructed from four publicly released datasets, including two web datasets (Conceptual Captions (Sharma et al., 2018), SBU Captions (Ordonez et al., 2011)) and two in-domain datasets (MS-COCO (Lin et al., 2014) and Visual Genome (Krishna et al., 2017)). The entire pre-training dataset contains about 4.0M unique images and 5.1M image-text pairs.
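Before turning to the experimental setup, the anchor-locating term of the objective above (Eqns. 1-2) can be sketched in isolation. The toy numpy version below uses a tiny dictionary of random features in place of the momentum queue of $K = 65{,}536$ entries; it is an illustration of the hinge form of Eqn. 2, not the training code.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 16, 8                 # feature dim; queue size (65,536 in the paper)
lam, delta = 2.0, 0.05       # scale and margin from the paper

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = rng.normal(size=D)                # an image feature
y = rng.normal(size=D)                # its matching text feature
queue = rng.normal(size=(K - 1, D))   # negatives standing in for the dictionary O_y

# Eqn. 2: hinge on the gap between the positive similarity and a
# log-sum-exp softening of the hardest negatives.
neg = np.array([cos(x, z) for z in queue])
H_i2t = -min(0.0, cos(x, y) - delta - np.log(np.exp(lam * neg).sum()) / lam)
assert H_i2t >= 0.0          # the loss vanishes once the margin is satisfied
```

$H_{t2i}$ is the same computation with the roles of the image and text features exchanged, and the full $\mathcal{L}_{ace}$ of Eqn. 1 averages the two and adds the token-wise log-sum-exp term.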
Implementation Details Our ACE-RPE method contains 163.7M parameters, including a 66.6M linguistic transformer (Devlin et al., 2018) as the text encoder, a 43.8M ViT-B/16 (Dosovitskiy et al., 2020) as the image encoder and a 53.3M transformer (Devlin et al., 2018) as the cross-modal encoder. Notably, the text encoder is constructed with the first 6 layers of the original $\mathrm{BERT}_{\mathrm{base}}$ . As presented in Fig. 3, the pre-training objectives are composed of three tasks: Masked Language Modeling (MLM) (Li et al., 2021) for text embedding, Masked Image Modeling (MIM) (Xie et al., 2021) for image embedding, and Image-Text Matching (ITM) for cross-modal modeling. Our model is pre-trained for 30 epochs with a batch size of 512 on 8 NVIDIA A100 GPUs. We use AdamW (Loshchilov and Hutter, 2017) with a weight decay of 0.02. The initial learning rate is $10^{-4}$ and is decayed to $10^{-6}$ using a cosine schedule (Loshchilov and Hutter, 2016). We use RandAugment (Cubuk et al., 2020) as the image augmentation strategy, and then scale the augmented images to a resolution of $256 \times 256$ . We also utilize the momentum distillation proposed in ALBEF (Li et al., 2021), and the queue size is 65,536. By default, the hyper-parameters are set as $B_{t} = 5$ , $B_{p} = 9$ , $\lambda = 2$ , $\delta = 0.05$ and $G = 8$ , respectively.

# 4.2 Downstream Cross-modal Tasks

We conduct a comprehensive experimental comparison on 5 cross-modal tasks, including: 1) Image-Text Retrieval on MS-COCO (Lin et al., 2014) and Flickr30K (Plummer et al., 2015); 2) Visual Entailment on SNLI-VE (Xie et al., 2019); 3) Visual Reasoning on NLVR2 (Suhr et al., 2018); 4) Visual Question Answering on VQA (Goyal et al., 2017); and 5) Weakly-supervised Visual Grounding on RefCOCO+ (Yu et al., 2016).

Image-Text Retrieval Image-Text Retrieval refers to retrieving the most relevant images given a query text, and vice versa. We evaluate our methods on
| Cross-modal Position Embedding | Pre-trained Images | Flickr30K TR R@1/5/10 | Flickr30K IR R@1/5/10 | MS-COCO TR R@1/5/10 | MS-COCO IR R@1/5/10 |
| --- | --- | --- | --- | --- | --- |
| None | 4M | 94.3 / 99.5 / 99.8 | 83.0 / 96.8 / 98.4 | 72.6 / 91.2 / 95.7 | 56.5 / 81.3 / 89.1 |
| APE | 4M | 94.5 / 99.6 / 99.9 | 83.2 / 97.0 / 98.4 | 73.0 / 91.3 / 95.8 | 56.7 / 81.5 / 89.2 |
| RPE | 4M | 94.4 / 99.5 / 99.9 | 83.2 / 97.1 / 98.4 | 73.2 / 91.4 / 95.9 | 56.7 / 81.7 / 89.3 |
| APE + RPE | 4M | 94.5 / 99.6 / 99.9 | 83.3 / 97.2 / 98.5 | 73.2 / 91.5 / 96.0 | 56.9 / 81.8 / 89.3 |
| Uniform† | 4M | 94.6 / 99.6 / 99.9 | 83.3 / 97.3 / 98.5 | 73.3 / 91.6 / 96.0 | 56.9 / 81.9 / 89.4 |
| ACE-RPE | 4M | 95.2 / 99.6 / 99.9 | 83.5 / 97.3 / 98.6 | 73.9 / 92.0 / 96.5 | 57.6 / 82.0 / 90.1 |
| ACE-RPE + $\mathcal{L}_{ace}$ | 4M | 95.4 / 99.7 / 99.9 | 84.0 / 97.6 / 98.9 | 74.2 / 92.2 / 96.8 | 57.9 / 82.4 / 90.2 |
| ACE-RPE + $\mathcal{L}_{ace}$ | 14M* | 96.7 / 99.9 / 100.0 | 87.0 / 97.8 / 99.1 | 78.9 / 95.2 / 97.7 | 61.4 / 85.3 / 91.0 |
$\dagger$ : calculates the distance of all words and patches by a uniform distance without the guidance of "anchor".
*: extended with the extra pre-training dataset CC12M (Changpinyo et al., 2021).

Table 1: Comparison in the Image-Text Retrieval task on Flickr30K (1K test set) and MS-COCO (5K test set). For text retrieval (TR) and image retrieval (IR), we report the Top-1 Recall (R@1), Top-5 Recall (R@5) and Top-10 Recall (R@10). The FLOPs of our ACE-RPE model is 122G, just a $6.1\%$ computational overhead compared with the "None" version (115G FLOPs).

two benchmarks, MS-COCO (Lin et al., 2014) and Flickr30K (Plummer et al., 2015). Following ALBEF (Li et al., 2021), the resolution of image crops is increased to $384 \times 384$ for more fine-grained retrieval. During finetuning, we employ the ITM head in Fig. 3 to predict whether the input images and texts are matched.

Visual Entailment Visual Entailment is to predict the relationship of image-text pairs, i.e., entailment, neutral, or contradictory. The SNLI-VE (Xie et al., 2019) dataset is taken as our Visual Entailment benchmark. We follow UNITER (Chen et al., 2020a), considering Visual Entailment as a three-way classification problem, and predict the class probabilities using a multi-layer perceptron on the [CLS] token.

Visual Reasoning The goal of Visual Reasoning is also to predict the relationship between the given texts and images. However, each input contains two images and one text, where the text is correlated with both of the images. The model should learn to identify whether the statement of the text is right for the given images. It is conducted on NLVR2 (Suhr et al., 2018) in this paper.

Visual Question Answering Given an image, Visual Question Answering requires the model to predict the answer to a question. For a fair comparison with ALBEF (Li et al., 2021), we consider this task as an answer generation task on the VQA (Goyal et al., 2017) benchmark.
In detail, an additional 6-layer transformer is applied to generate the answer, receiving the cross-modal embeddings through the cross-modal encoder in Fig. 3.

Weakly-supervised Visual Grounding Visual Grounding (on RefCOCO+ (Yu et al., 2016)) is to localize the region of an image that corresponds to a given textual description. We follow a weakly-supervised setting (Li et al., 2021), where the model is finetuned with the same strategy as the image-text retrieval task and outputs heatmaps by Grad-CAM (Selvaraju et al., 2017).

# 4.3 Comparison with Baseline Methods

In this part, we conduct 4 downstream cross-modal tasks (except for RefCOCO+) to compare the proposed ACE-RPE with the baseline methods, including 1) the APE method (Dosovitskiy et al., 2020); 2) the RPE method (Dosovitskiy et al., 2020); 3) a unified method combining APE and RPE (Wu et al., 2021). Remarkably, among all methods, our ACE-RPE is the only cross-modal position embedding. The mentioned APE, RPE and their combined version are all applied to each modality separately; they are simply concatenated together and then injected into the cross-modal encoder. Furthermore, we also evaluate a uniform variant of our ACE-RPE, where the distances of all words and patches are naively calculated by a uniform distance without the guidance of "anchor".
| Cross-modal Position Embedding | VQA dev | VQA std | SNLI-VE dev | SNLI-VE test | NLVR2 dev | NLVR2 test |
| --- | --- | --- | --- | --- | --- | --- |
| None | 73.2 | 73.6 | 79.2 | 79.5 | 79.9 | 80.5 |
| APE | 73.9 | 74.1 | 80.2 | 80.7 | 80.6 | 81.0 |
| RPE | 73.8 | 73.9 | 79.4 | 79.6 | 80.3 | 80.7 |
| APE + RPE | 73.9 | 74.1 | 80.1 | 80.9 | 80.5 | 80.8 |
| Uniform† | 74.1 | 74.2 | 80.3 | 81.0 | 80.5 | 80.9 |
| ACE-RPE | 74.9 | 75.1 | 81.1 | 81.4 | 81.3 | 81.7 |
| ACE-RPE + $\mathcal{L}_{ace}$ | 75.4 | 75.7 | 81.4 | 82.0 | 81.7 | 81.9 |
| ACE-RPE + $\mathcal{L}_{ace}$* | 76.8 | 76.9 | 82.0 | 82.5 | 83.1 | 83.6 |
+ +$\dagger$ : calculates the distance of all words and patches by a uniform distance without the guidance of "anchor". +*: pretrained on CC12M (Changpinyo et al., 2021). + +Table 2: Evaluation of the proposed methods on VQA (Goyal et al., 2017), Visual Entailment (SNLI-VE (Xie et al., 2019)) and Visual Reasoning (NLVR (Suhr et al., 2018)) tasks. "dev" and "std" in VQA are the test-dev and test-standard datasets. + +Numerical results are presented in Table 1 and + +
| Methods | Pre-trained Images | Flickr30K TR R@1/5/10 | Flickr30K IR R@1/5/10 | MS-COCO TR R@1/5/10 | MS-COCO IR R@1/5/10 |
| --- | --- | --- | --- | --- | --- |
| UNITER | 4M | 87.3 / 98.0 / 99.2 | 75.6 / 94.1 / 96.8 | 65.7 / 88.6 / 93.8 | 52.9 / 79.9 / 88.0 |
| VILLA | 4M | 87.9 / 97.5 / 98.8 | 76.3 / 94.2 / 96.8 | – | – |
| OSCAR | 4M | – | – | 70.0 / 91.1 / 95.5 | 54.0 / 80.8 / 88.5 |
| ALIGN | 1.2B | 95.3 / 99.8 / 100.0 | 84.9 / 97.4 / 98.6 | 77.0 / 93.5 / 96.9 | 59.9 / 83.3 / 89.8 |
| ALBEF | 4M | 94.3 / 99.4 / 99.8 | 82.8 / 96.7 / 98.4 | 73.1 / 91.4 / 96.0 | 56.8 / 81.5 / 89.2 |
| ALBEF | 14M | 95.9 / 99.8 / 100.0 | 85.6 / 97.5 / 98.9 | 77.6 / 94.3 / 97.2 | 60.7 / 84.3 / 90.5 |
| Ours | 4M | 95.4 / 99.7 / 99.9 | 84.0 / 97.6 / 98.9 | 74.2 / 92.2 / 96.8 | 57.9 / 82.4 / 90.2 |
| Ours | 14M | 96.7 / 99.9 / 100.0 | 87.0 / 97.8 / 99.1 | 78.9 / 95.2 / 97.7 | 61.4 / 85.3 / 91.0 |
Table 2. In the Image-Text Retrieval task (Table 1), our proposed ACE-RPE enhances the performance of the backbones by large margins. Specifically, compared with the baseline without cross-modal position embedding (the "None" counterpart), our method improves R@1 by over $1.1\%$ and $1.0\%$ in "TR" and "IR" on Flickr30K. The corresponding R@1 gains in "TR" and "IR" on MS-COCO are up to $1.6\%$ and $1.4\%$ . It is worth noting that these gains are achieved with the same backbone networks and the same pre-training dataset. Meanwhile, when trained on a larger dataset with 14M samples, our model achieves two new SOTA results on Flickr30K and MS-COCO.

Table 3: Experimental results of Image-Text Retrieval on Flickr30K (1K test set) and MS-COCO (5K test set).
| Method | VQA dev | VQA std | SNLI-VE dev | SNLI-VE test | NLVR2 dev | NLVR2 test |
| --- | --- | --- | --- | --- | --- | --- |
| VisualBERT (Li et al., 2019) | 70.8 | 71.0 | – | – | 67.4 | 67.0 |
| VL-BERT (Su et al., 2020) | 71.2 | – | – | – | – | – |
| LXMERT (Tan and Bansal, 2019) | 72.4 | 72.5 | – | – | 74.9 | 74.5 |
| 12-in-1 (Lu et al., 2020) | 73.2 | – | – | 77.0 | – | 78.9 |
| UNITER (Chen et al., 2020b) | 72.7 | 72.9 | 78.6 | 78.3 | 77.2 | 77.9 |
| VL-BART/T5 (Cho et al., 2021) | – | 71.3 | – | – | – | 73.6 |
| ViLT (Kim et al., 2021) | 70.9 | – | – | – | 75.2 | 76.2 |
| OSCAR (Li et al., 2020) | 73.2 | 73.4 | – | – | 78.1 | 78.4 |
| VILLA (Gan et al., 2020) | 73.6 | 73.7 | 79.4 | 79.0 | 78.4 | 79.3 |
| ALBEF (Li et al., 2021) (4M) | 74.5 | 74.7 | 80.1 | 80.3 | 80.2 | 80.5 |
| ALBEF (Li et al., 2021) (14M) | 75.8 | 76.0 | 80.8 | 80.9 | 82.6 | 83.1 |
| ACE-RPE (4M) | 74.9 | 75.1 | 81.1 | 81.4 | 81.3 | 81.7 |
| ACE-RPE + $\mathcal{L}_{ace}$ (4M) | 75.4 | 75.7 | 81.4 | 82.0 | 81.7 | 81.9 |
| ACE-RPE + $\mathcal{L}_{ace}$ (14M) | 76.8 | 76.9 | 82.0 | 82.5 | 83.1 | 83.6 |
For the tasks of Visual Question Answering on VQA, Visual Entailment on SNLI-VE and Visual Reasoning on NLVR, the proposed ACE-RPE also outperforms the baseline methods robustly, as shown in Table 2. Furthermore, the comparison between "ACE-RPE" and "ACE-RPE + $\mathcal{L}_{ace}$ " reveals that the proposed $\mathcal{L}_{ace}$ is key to the performance improvement of ACE-RPE.

# 4.4 Comparison with SOTA Methods

Table 3, Table 4 and Table 5 report the results of the proposed ACE-RPE and previous SOTA methods. Pretrained on the dataset with 4M images, our method achieves absolute improvements over ALBEF of $1.1\%$ R@1 in "TR" and $1.2\%$ R@1 in "IR" on Flickr30K. The corresponding R@1 gains in "TR" and "IR" on MS-COCO are up to $1.1\%$ and $1.1\%$ . For the Visual Entailment, Visual Reasoning and Weakly-supervised Visual Grounding tasks, ACE-RPE also outperforms existing methods by substantial margins. With the 14M pre-training dataset, which is also used in ALBEF, our method achieves 5 new SOTA results on all benchmarks1, which demonstrates the superiority and robustness of our ACE-RPE.

Table 4: Comparison with SOTA works on VQA, SNLI-VE and NLVR benchmarks. "dev" and "std" in VQA are the test-dev and test-standard datasets.
| Method | Val | TestA | TestB |
|---|---|---|---|
| ARN (Liu et al., 2019) | 32.8 | 34.4 | 32.1 |
| CCL (Zhang et al., 2020) | 34.3 | 36.9 | 33.6 |
| ALBEF (Li et al., 2021) | 58.5 | 65.9 | 46.3 |
| ACE-RPE (4M) | 59.4 | 66.6 | 47.1 |
| ACE-RPE + $\mathcal{L}_{ace}$ (4M) | 60.1 | 67.5 | 47.9 |
| ACE-RPE + $\mathcal{L}_{ace}$ (14M) | 60.5 | 67.9 | 48.2 |
Table 5: Weakly-supervised visual grounding on the RefCOCO+ benchmark.

# 4.5 Visualization of ACE-RPE

To reveal the inherent ability of the proposed ACE-RPE to model cross-modal positional information, we provide Grad-CAM visualizations (Selvaraju et al., 2017; Li et al., 2021) of the anchor-based relative position in the last cross-modal transformer. Fig. 4 shows examples from MS-COCO. The cross-modal localization in the visualizations is highly correlated with human priors, which supports the correctness of our ACE-RPE.

![](images/5ac27d7e4ce5903fa8097e6976e5274c704a75bd00a41192e7ee2313e0703225.jpg)

Figure 4: The Grad-CAM (Selvaraju et al., 2017) visualization of cross-modal distance on the last cross-attention layer, shown word by word for two example captions, including "the man is reading a newspaper while carrying an umbrella". The words in red are the anchors.

# 5 Conclusion

In this paper, we present a cross-modal position embedding method, called ACE-RPE, in which we first utilize an anchor locating method to learn to match text words to image patches. Then, we compute physical distances between anchors and tokens from the different modalities, which are applied for cross-modal fusion. We conduct comprehensive experiments to analyze the effectiveness of the different components of ACE-RPE, as well as its performance under different modes and hyper-parameter settings. To the best of our knowledge, this work is the first to present position embeddings for cross-modal tasks, and the experimental results demonstrate the superiority of our method.

# Limitations

Although the proposed ACE-RPE achieves significant performance gains on 6 benchmarks, it has two major limitations: 1) ACE-RPE is injected into the backbone model during both the pretraining and finetuning procedures, and pretraining is much more time-consuming than finetuning.
It would be more efficient if ACE-RPE could maintain comparable results when the model is simply initialized from a publicly released pretrained checkpoint and ACE-RPE is employed only during finetuning on downstream tasks; we believe this kind of implementation deserves further experimental study. 2) The experiments in this paper are conducted on 8 NVIDIA A100 GPUs, which is expensive for individual researchers.

# References

Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. 2021. Conceptual 12M: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3558-3568.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020a. Uniter: Universal image-text representation learning. In European conference on computer vision, pages 104-120. Springer.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020b. UNITER: universal image-text representation learning. In ECCV, volume 12375, pages 104-120.
Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021. Unifying vision-and-language tasks via text generation. arXiv preprint arXiv:2102.02779.
Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. 2021. Conditional positional encodings for vision transformers. arXiv preprint arXiv:2102.10882.
Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. 2020. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 702-703.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context.
arXiv preprint arXiv:1901.02860. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. +Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. 2020. Large-scale adversarial training for vision-and-language representation learning. In NeurIPS. +Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904-6913. +Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9729-9738. + +Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. +Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. arXiv preprint arXiv:2102.03334. +Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123(1):32-73. 
Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation. Advances in Neural Information Processing Systems, 34.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557.
Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In ECCV, pages 121-137.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer.
Xuejing Liu, Liang Li, Shuhui Wang, Zheng-Jun Zha, Dechao Meng, and Qingming Huang. 2019. Adaptive reconstruction network for weakly supervised referring expression grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2611-2620.
Ilya Loshchilov and Frank Hutter. 2016. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983.
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 2020. 12-in-1: Multi-task vision and language representation learning. In CVPR, pages 10434-10443.
Frank Nielsen and Ke Sun. 2016. Guaranteed bounds on information-theoretic measures of univariate mixtures using piecewise log-sum-exp inequalities. Entropy, 18(12):442.
Vicente Ordonez, Girish Kulkarni, and Tamara Berg. 2011. Im2text: Describing images using 1 million captioned photographs. Advances in neural information processing systems, 24.
+Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641-2649. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. +Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pages 618-626. +Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556-2565. +Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155. +Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, and Ashish Vaswani. 2021. Bottleneck transformers for visual recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16519-16529. +Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. Vl-bert: Pre-training of generic visual-linguistic representations. In ICLR. +Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2018. 
A corpus for reasoning about natural language grounded in photographs. arXiv preprint arXiv:1811.00491.
Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490.
Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for unaligned multimodal language sequences. In Proceedings of the conference. Association for Computational Linguistics. Meeting, volume 2019, page 6558. NIH Public Access.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Benyou Wang, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, and Jakob Grue Simonsen. 2020. On position embeddings in bert. In International Conference on Learning Representations.
Kan Wu, Houwen Peng, Minghao Chen, Jianlong Fu, and Hongyang Chao. 2021. Rethinking and improving relative position encoding for vision transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10033-10041.
Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. 2017. Visual question answering: A survey of methods and datasets. Computer Vision and Image Understanding, 163:21-40.
Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. 2019. Visual entailment: A novel task for fine-grained image understanding. arXiv preprint arXiv:1901.06706.
Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. 2021. Simmim: A simple framework for masked image modeling. arXiv preprint arXiv:2111.09886.
Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. 2016. Modeling context in referring expressions. In European Conference on Computer Vision, pages 69-85. Springer.
Qinglong Zhang and Yu-Bin Yang. 2021. Rest: An efficient transformer for visual recognition. Advances in Neural Information Processing Systems, 34.
Zhu Zhang, Zhou Zhao, Zhijie Lin, Xiuqiang He, et al. 2020. Counterfactual contrastive learning for weakly-supervised vision-language grounding. Advances in Neural Information Processing Systems, 33:18123-18134.

# A Appendices

# A.1 Shared vs. Unshared

ACE-RPE can also be used in a shared mode with fewer parameters. In this part, we conduct experiments with shared ACE-RPE and compare the results with the unshared version. Table 6 shows that sharing ACE-RPE results in a slight performance drop on the Image-Text Retrieval and Visual Reasoning tasks.
| Mode | Flickr30K TR | Flickr30K IR | MS-COCO TR | MS-COCO IR | NLVR dev | NLVR test |
|---|---|---|---|---|---|---|
| Shared | 98.1 | 93.2 | 87.4 | 76.7 | 81.2 | 81.5 |
| Unshared | 98.3 | 93.5 | 87.7 | 76.8 | 81.7 | 81.9 |
Table 6: Ablation study on the Image-Text Retrieval and Visual Reasoning tasks. The average recall on the test set is reported for Flickr30K and MS-COCO.

# A.2 Robustness on Hyper-parameters

The default hyper-parameters of the proposed method are $\lambda = 2$, $\delta = 0.05$ and $G = 8$. Table 7 compares different choices of these hyper-parameters. A larger $\lambda$ in the anchor loss (Eqn. 1) forces the model to focus on the most similar anchor, while smaller values let it predict more candidate anchors. $\delta$ serves as the threshold for selecting anchors, and $G$ is the number of groups in the proposed multi-head distance. $\lambda$ and $G$ influence performance more significantly than $\delta$, and once $G$ exceeds 8, the performance of ACE-RPE remains almost unchanged.
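The role of the threshold $\delta$ can be illustrated with a small, purely hypothetical sketch. The similarity scores and the softmax normalization below are our own assumptions for illustration; the paper's actual anchor-locating loss (Eqn. 1) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6            # candidate image patches for one text token (toy value)
delta = 0.05     # anchor-selection threshold (the paper's default)

# Hypothetical similarity scores between one text token and each patch.
sims = rng.normal(size=N)

# Softmax turns the scores into a distribution over candidate anchors.
probs = np.exp(sims - sims.max())
probs /= probs.sum()

# Patches whose probability clears delta are kept as anchors; since the
# argmax probability is at least 1/N > delta, at least one anchor survives.
anchors = np.flatnonzero(probs > delta)
```

With a small $\delta$ such as 0.05, most plausible patches survive as anchors; raising $\delta$ restricts the model to the few most similar patches, matching the trade-off described above for $\lambda$.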
| MS-COCO | λ=1 | λ=2 | λ=3 | λ=4 | δ=0.01 | δ=0.05 | δ=0.1 | δ=0.2 | G=1 | G=4 | G=8 | G=16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TR | 85.9 | 87.7 | 87.6 | 87.5 | 87.5 | 87.7 | 87.5 | 87.6 | 87.1 | 87.6 | 87.7 | 87.7 |
| IR | 75.0 | 76.8 | 76.6 | 76.6 | 76.5 | 76.8 | 76.7 | 76.7 | 76.3 | 76.6 | 76.8 | 76.9 |
Table 7: Ablation study on the Image-Text Retrieval task on MS-COCO. The average recall on the test set is reported.

# A.3 Bias vs. Contextual Modes

ACE-RPE provides a position embedding for each text word and image patch. In this part, we propose two different cross-attention modes for interacting with ACE-RPE, i.e., the bias mode and the contextual mode.

Bias Mode In this mode, ACE-RPE has no explicit interaction with the query, key or value in the transformer block. Instead, it functions as a bias of the cross-attention block. Formally,

$$
\left\{ \begin{array}{l} \mathcal{F}_{i2t}(x, y) = \frac{(x W^{Q})(y W^{K})^{T} + E_{ace}(x, y) W_{E}}{\sqrt{D}} \\ \mathcal{F}_{t2i}(y, x) = \frac{(y W^{Q})(x W^{K})^{T} + E_{ace}(x, y) W_{E}}{\sqrt{D}} \end{array} \right. \tag{9}
$$

where $\mathcal{F}_{i2t}$ and $\mathcal{F}_{t2i}$ are the image-to-text and text-to-image cross-attention, respectively. $E_{ace}(x,y)\in \mathbb{R}^{M\times N\times D}$ is a 3-dimensional tensor denoting the ACE-RPE between all text tokens and image patches. $W^{Q}$ and $W^{K}$ are learnable matrices. $W_{E}\in \mathbb{R}^{D}$ is a learnable vector, which maps $E_{ace}(x,y)$ into a 2-dimensional matrix.

Contextual Mode In contextual mode, ACE-RPE is first flattened into two dimensions by average pooling and then added to the token/patch embeddings,

$$
\left\{ \begin{array}{l} \bar{x}_{i} = x_{i} + \mathbb{E}_{j=1}^{N} E_{ace}(x_{i}, y_{j}) \\ \bar{y}_{j} = y_{j} + \mathbb{E}_{i=1}^{M} E_{ace}(x_{i}, y_{j}) \end{array} \right. \tag{10}
$$

The cross-attention is then

$$
\left\{ \begin{array}{l} \mathcal{F}_{i2t}(\bar{x}, \bar{y}) = \frac{(\bar{x} W^{Q})(\bar{y} W^{K})^{T}}{\sqrt{D}} \\ \mathcal{F}_{t2i}(\bar{y}, \bar{x}) = \frac{(\bar{y} W^{Q})(\bar{x} W^{K})^{T}}{\sqrt{D}} \end{array} \right. \tag{11}
$$

In this case, ACE-RPE interacts with the queries and keys in the cross-attention block. Besides, it can also be applied to the value embeddings,

$$
\left\{ \begin{array}{l} Z_{i2t}(\bar{x}, \bar{y}) = \sigma\left(\mathcal{F}_{i2t}(\bar{x}, \bar{y})\right)\left(\bar{y} W^{V} + E_{ace}\right)^{T} \\ Z_{t2i}(\bar{y}, \bar{x}) = \sigma\left(\mathcal{F}_{t2i}(\bar{y}, \bar{x})\right)\left(\bar{x} W^{V} + E_{ace}\right)^{T} \end{array} \right. \tag{12}
$$

Here, $\sigma(\cdot)$ denotes the softmax function, and $W^{V}$ is a learnable matrix. $E_{ace}$ is short for $E_{ace}(x,y)$.

Experimental Result We compare the performance of the two cross-modal modes, i.e., the "Bias" and "Contextual" modes. Table 8 reports the numerical results on the Image-Text Retrieval and Visual Reasoning tasks. Using the proposed ACE-RPE in contextual mode proves to be the better choice.
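The two modes can be contrasted in a small NumPy sketch of Eqns. 9-11. The shapes and random weights below are toy assumptions; the multi-head structure and the value-path term of Eqn. 12 are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, D = 4, 6, 8                    # text tokens, image patches, hidden size (toy)
x = rng.normal(size=(M, D))          # text token embeddings
y = rng.normal(size=(N, D))          # image patch embeddings
E_ace = rng.normal(size=(M, N, D))   # cross-modal relative position tensor
W_Q, W_K = rng.normal(size=(D, D)), rng.normal(size=(D, D))
W_E = rng.normal(size=D)             # maps E_ace to a scalar bias per (i, j)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Bias mode (Eqn. 9): the position term enters as an additive attention bias.
def bias_mode_i2t(x, y):
    bias = E_ace @ W_E                            # (M, N)
    return ((x @ W_Q) @ (y @ W_K).T + bias) / np.sqrt(D)

# Contextual mode (Eqns. 10-11): pool E_ace and add it to the embeddings.
def contextual_mode_i2t(x, y):
    x_bar = x + E_ace.mean(axis=1)                # average over image patches
    y_bar = y + E_ace.mean(axis=0)                # average over text tokens
    return (x_bar @ W_Q) @ (y_bar @ W_K).T / np.sqrt(D)

attn_bias = softmax(bias_mode_i2t(x, y))          # (M, N) attention maps
attn_ctx = softmax(contextual_mode_i2t(x, y))
```

The design difference is visible in the code: in bias mode the position tensor only shifts the attention logits, while in contextual mode it changes the query and key representations themselves before any attention is computed.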
| Mode | Flickr30K TR | Flickr30K IR | MS-COCO TR | MS-COCO IR | NLVR dev | NLVR test |
|---|---|---|---|---|---|---|
| Bias | 98.1 | 93.4 | 87.4 | 76.6 | 81.5 | 81.6 |
| Contextual | 98.3 | 93.5 | 87.7 | 76.8 | 81.7 | 81.9 |
Table 8: Ablation study on the Image-Text Retrieval and Visual Reasoning tasks. The average recall on the test set is reported for Flickr30K and MS-COCO.

# A.4 Component-wise Analysis

In image processing, the position embedding can interact with the calculation of the query, key and value in the self-attention layer (Wu et al., 2021). Inspired by this, we analyze each of these choices in cross-modal modeling; the results are shown in Table 9. ACE-RPE applied to the values alone obtains only slight gains over the version without ACE-RPE, while injecting it into the queries and keys results in significant performance gains.
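The ablation in Table 9 toggles where the position term enters the attention computation. A minimal sketch of this idea follows; the toy shapes and random weights are our assumptions, and pooling the 3-D position tensor into per-token vectors is a simplification for illustration, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, D = 4, 6, 8                          # text tokens, image patches, hidden size
x, y = rng.normal(size=(M, D)), rng.normal(size=(N, D))
E_ace = rng.normal(size=(M, N, D))         # cross-modal position tensor
W_Q, W_K, W_V = [rng.normal(size=(D, D)) for _ in range(3)]

def cross_attend(use_query, use_key, use_value):
    """Image-to-text cross-attention with the pooled position term
    optionally injected into the query, key and/or value paths."""
    p_x = E_ace.mean(axis=1)               # (M, D), pooled over patches
    p_y = E_ace.mean(axis=0)               # (N, D), pooled over tokens
    q = (x + p_x if use_query else x) @ W_Q
    k = (y + p_y if use_key else y) @ W_K
    v = (y + p_y if use_value else y) @ W_V
    scores = q @ k.T / np.sqrt(D)
    scores -= scores.max(axis=-1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v                        # (M, D)

out_none = cross_attend(False, False, False)   # no position information
out_full = cross_attend(True, True, True)      # full injection
```

Each flag combination corresponds to one row of the ablation: the position term can reshape the attention pattern (via queries/keys) or the aggregated content (via values), and the two effects can be tested independently.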
| query | key | value | Flickr30K TR | Flickr30K IR | MS-COCO TR | MS-COCO IR | NLVR dev | NLVR test |
|---|---|---|---|---|---|---|---|---|
| × | × | × | 97.8 | 92.7 | 86.5 | 75.6 | 79.9 | 80.5 |
| ✓ | × | × | 98.1 | 93.3 | 87.4 | 76.6 | 81.4 | 81.6 |
| × | ✓ | × | 98.1 | 93.2 | 87.5 | 76.7 | 81.4 | 81.5 |
| × | × | ✓ | 97.8 | 92.8 | 86.7 | 76.0 | 80.7 | 81.0 |
| ✓ | ✓ | × | 98.2 | 93.3 | 87.5 | 76.6 | 81.6 | 81.8 |
| ✓ | ✓ | ✓ | 98.3 | 93.5 | 87.7 | 76.8 | 81.7 | 81.9 |
Table 9: Ablation study on Image-Text Retrieval and Visual Reasoning. The average recall on the test set is reported on Flickr30K and MS-COCO.
diff --git a/asecondwaveofudhebrewtreebankingandcrossdomainparsing/full.md b/asecondwaveofudhebrewtreebankingandcrossdomainparsing/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b770360064eb5950f1a2534ab122c3474f685f51
--- /dev/null
+++ b/asecondwaveofudhebrewtreebankingandcrossdomainparsing/full.md
@@ -0,0 +1,342 @@

# A Second Wave of UD Hebrew Treebanking and Cross-Domain Parsing

Amir Zeldes, Georgetown University, amir.zeldes@georgetown.edu

Noam Ordan, IAHLT, noam.ordan@gmail.com

Nick Howell, IAHLT, nIhowell@gmail.com

Yifat Ben Moshe, IAHLT, yifat1811@gmail.com

# Abstract

Foundational Hebrew NLP tasks such as segmentation, tagging and parsing have relied to date on various versions of the Hebrew Treebank (HTB, Sima'an et al. 2001).
However, the data in HTB, a single-source newswire corpus, is now over 30 years old, and does not cover many aspects of contemporary Hebrew on the web. This paper presents a new, freely available UD treebank of Hebrew stratified from a range of topics selected from Hebrew Wikipedia. In addition to introducing the corpus and evaluating the quality of its annotations, we deploy automatic validation tools based on grew (Guillaume, 2021), and conduct the first cross-domain parsing experiments in Hebrew. We obtain new state-of-the-art (SOTA) results on UD NLP tasks, using a combination of the latest language modelling and some incremental improvements to existing transformer-based approaches. We also release a new version of the UD HTB matching annotation scheme updates from our new corpus.

# 1 Introduction

Treebanks (TBs) form a fundamental resource for NLP and computational linguistics research: they provide high quality annotated data for tokenization, sentence splitting, POS tagging and syntactic/semantic relation extraction. For Morphologically Rich Languages (MRLs, Seddah et al., 2014), high quality morphosyntactically annotated data is particularly crucial, since basic search on a word-matching level cannot work before morphological segmentation has occurred. In languages such as Hebrew, where vowels are not well represented in the script, the need for reliable morphosyntactic disambiguation is particularly strong, since string-level ambiguity is both frequent and severe. This is demonstrated by the following frequently cited example from Adler and Elhadad (2006), which has a large number of possible analyses for just a four-character sequence (note Hebrew is right-to-left).
(1)

$\langle \mathrm{b.cl.m}\rangle$ be.cil.am - in shadow.their

$\langle \mathrm{b.clm}\rangle$ (be./b.a.)celem - in.(a/the).image

$\langle \mathrm{b.clm}\rangle$ (be./b.a.)calam - in.(a/the).photographer

$\langle \mathrm{bcl.m}\rangle$ bcal.am - onion.their

$\langle \mathrm{bclm}\rangle$ becelem - Betzelem (organization)

When such sequences contain multiple sub-tokens (e.g. 'in' and 'the' within 'in.the.image'), we follow Universal Dependencies terminology in referring to the larger unit as a Multi-Word Token (MWT), and to the sub-parts, each of which carries a separate part-of-speech, as tokens.

Although a treebank for Hebrew was already created by the MILA Institute (Sima'an et al. 2001, hence HTB), subsequently converted to dependencies (Goldberg and Elhadad, 2009) and finally to the more recent Universal Dependencies framework (see Sade et al. 2018, hence UD-HTB), two major concerns motivate the current work in creating a new UD treebank for Hebrew. The first is the age of the data: HTB texts were taken from 1990-91 issues of a single newspaper, Ha'aretz, as illustrated in (2), describing the introduction of computers to an office.

(2) ba-axrona huxnesu ma'arexet maxshev ve-toxnot le-kol ha'anafim

'Recently a computer system and software were introduced into all branches'

This is one of only a dozen examples of the word 'computer' in the data, which predates mainstream Web 1.0 times and mentions no cellular phones, no Internet, and none of a variety of entities established after 1990, including the EU. Previous work has shown that NLP systems retain strong lexical biases mirroring both period and author demographics (Shah et al., 2020), which in the case of HTB reflect primarily Israeli Ha'aretz journalists from 1990.

A second concern beyond the period and author demographics is genre.
HTB is a news corpus, and as such reflects formal journalistic writing, which focuses on describing prominent political events, sports news, and reported speech, usually in the past tense, but under-represents expository text, academic language, and colloquial spellings (which are much more variable in Hebrew than in English). The importance of genre diversity in training data has often been noted (Zeldes and Simonson, 2016; Müller-Eberstein et al., 2021), but without a second genre to test on, we simply do not know the extent to which fundamental Hebrew NLP performance degrades outside of HTB's language.

In this paper, we attempt to broaden the range of data available for Hebrew by creating and evaluating a new, freely available (CC-BY-SA license), gold standard corpus using contemporary Wikipedia data. Although Wikipedia's language is also relatively formal, it differs substantially from newspaper reporting, and is more contemporary than HTB, covering a broad range of topics, while being available under an open license. Our main contributions are:

1. A new TB of Hebrew Wikipedia data from several domains annotated for all UD layers, including morphological segmentation and features, POS tags and dependency trees; we also report the first agreement scores for UD Hebrew
2. New SOTA results on the standard benchmark for Hebrew segmentation, tagging and parsing
3. The first cross-corpus evaluation of out-of-domain (OOD) Hebrew NLP across all UD tasks; we also perform error analysis indicating some issues with previous benchmark data
4. We release all code and trained models for the tools evaluated in Section 4, including new models for all tasks using the popular Stanza and Trankit libraries, as well as a new SOTA library tailored for Hebrew NLP

# 2 Previous work

In terms of material, there is only one existing TB of modern Hebrew prior to our work (Sima'an et al., 2001), based on 1990-91 issues of the newspaper Ha'aretz.
However, there are multiple versions of this dataset, leading to some confusion. The original TB contained constituent trees, as well as word-internal segmentation and treebank-specific POS tags. This dataset was converted into dependencies first by Goldberg and Elhadad (2009) using a custom scheme, and later to an early version of UD by Tsarfaty (2013). The current UD HTB, which uses an older version of the UD V2 guidelines that became invalid in 2018, is described in Sade et al. (2018), and is used below for evaluation using legacy tokenization. The updated version of this dataset released in this paper is based on the 2018 version.

This work represents the second treebank ever produced for modern Hebrew, and the first cross-domain NLP evaluation of dependency parsing for the language. We briefly survey the state of the art in UD Hebrew NLP in Section 4.

# 3 The new corpus

# 3.1 Contents

Our new corpus, IAHLTwiKI,$^{3}$ created by the nonprofit Israeli Association of Human Language Technology (IAHLT), contains 5K sentences taken from Wikipedia, which were annotated over 6 months, using the Grew-Arborator tool (Guibon et al., 2020), by a team of 6 annotators who all have either an undergraduate or graduate degree in Linguistics and a robust knowledge of Hebrew syntax. During annotator training, a member of the UD core group was consulted in a series of meetings whenever questions arose. Data was sampled from 7 topic categories in order to increase lexical and structural diversity: biographies, events, places, health, legal, finance and miscellaneous (see Table 1). Inclusion of the last category is intended to introduce some random topic variation into the data, while the former categories were selected in consultation with a consortium of Israeli industry partners based on high-interest applications for information extraction regarding people, places, healthcare, etc.
Rather than sampling random sentences, full Wikipedia entries were selected in order to allow for future annotation projects at the document level, such as salient entities, coreference resolution, or other types of discourse annotation. Domain data varies somewhat in size, though most domains cover roughly 20-30K tokens, with the exception of finance and event, to which we devoted less space
| domain | documents | tokens | sentences |
|---|---|---|---|
| bio | 5 | 21,963 | 754 |
| event | 4 | 16,202 | 580 |
| finance | 5 | 8,723 | 299 |
| health | 8 | 20,927 | 824 |
| law | 6 | 22,916 | 788 |
| place | 5 | 22,323 | 829 |
| misc | 6 | 27,895 | 965 |
| total | 39 | 140,949 | 5,039 |
Sentences in the corpus also trend longer, which may be of value for parser performance on longer sentences: $M = 25.02$ and $SD = 14.27$ tokens in HTB, compared to $M = 27.97$ and $SD = 15.79$ in IAHLTwiki.

# 3.2 Changes from HTB

The previous version of UD HTB has been invalid based on the official UD validation page since 2018, meaning several schema changes were needed to conform to the latest UD standards. To make comparison of the datasets and joint training possible, we release and evaluate a newly revised version of UD HTB which is valid by UD 2.10 release standards. This new version was initially created via automatic scripts using the DepEdit Python library (Peng and Zeldes, 2018) followed by manual post-editing and validation, and nearly doubles the total amount of UD data for Hebrew (to over 11K sentences), allowing for cross-domain experimentation.

Tokenization Besides correcting thousands of HTB errors and improving consistency, we introduce a major change to tokenization, which ensures that multiword tokens always correspond to the concatenation of their sub-tokens. In particular, we remove inserted possessive [shel] 'of' between nouns and their clitic possessives (as in (3) for the MWT [beit-o] 'his house'), object marking [et] (the accusative marker, as in (4) for [re'iti-ha] '(I) saw her') and orthographically unexpressed articles (as in (5) for [ba-bait] 'in the house').

(3) a. Old: [bait_shel_hu]
b. New: [beit o]

(4) a. Old: [ra'iti et_hi]
b. New: [re'iti ha]

(5) a. Old: [be ha bait]
b. New: [ba bait]

In (3), the former HTB tokenization inserted the word shel 'of' surrounded by underscores before clitic possessors, as though the example reads 'house_of_him', rather than 'his house'. This complicates tokenization and introduces an unnecessary inconsistency with related languages in UD, such as UD Arabic, which does not insert an unexpressed preposition in the same construction, nor object markers in cases like (4).
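The core invariant of the new scheme, that every multiword token is the exact concatenation of its sub-tokens, can be checked mechanically on CoNLL-U input. A minimal sketch (the fragments below are toy transliterated examples, not actual HTB data):

```python
def mwt_violations(conllu):
    """Return surface forms of multiword tokens (range IDs like '1-2')
    whose form is not the concatenation of the word forms they span."""
    rows = [line.split("\t") for line in conllu.strip().splitlines()
            if line and not line.startswith("#")]
    forms = {int(r[0]): r[1] for r in rows if r[0].isdigit()}
    bad = []
    for r in rows:
        if "-" in r[0]:  # multiword token line, e.g. "1-2"
            start, end = map(int, r[0].split("-"))
            if "".join(forms[i] for i in range(start, end + 1)) != r[1]:
                bad.append(r[1])
    return bad

# 'babait' ("in the house") split concatenatively into ba + bait:
new_style = "1-2\tbabait\t_\n1\tba\t_\n2\tbait\t_"
# Old-style tokenization inserted an unexpressed article, breaking the invariant:
old_style = "1-3\tbabait\t_\n1\tbe\t_\n2\tha\t_\n3\tbait\t_"

print(mwt_violations(new_style))  # passes: no violations
print(mwt_violations(old_style))  # flags 'babait'
```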
+ +The last case in (5), i.e. removal of zero articles (which speakers can reconstruct from context while reading Hebrew but are not trivial to predict) is the only change resulting in a loss of information, and we therefore replace these by a standard UD morphological feature Definite=Def. As a result, + +
| Under-represented in HTB | Wiki | HTB | ratio | Under-represented in Wikipedia data | Wiki | HTB |
|---|---|---|---|---|---|---|
| penicillin | 33 | 0 | 0 | tourism | 0 | 31 |
| 2018 | 22 | 0 | 0 | day before yesterday | 0 | 29 |
| LGBT | 3 | 0 | 0 | Kuwait | 0 | 26 |
| was (3.sg.F) | 141 | 1 | 0.007 | week | 8 | 110 |
| album | 183 | 3 | 0.016 | game | 9 | 115 |
| songs | 127 | 6 | 0.047 | but | 24 | 207 |
| disease | 35 | 3 | 0.085 | police | 20 | 149 |
| Arabic | 30 | 4 | 0.133 | member of parliament | 7 | 49 |
| Greek | 22 | 3 | 0.136 | dollar | 13 | 89 |
| credit | 11 | 2 | 0.181 | Soviet | 10 | 37 |
Table 2: Words (shown here by their English translations) with striking frequency differences, sorted by ratio of frequency in HTB vs. the Wiki data. Ratios for words overrepresented in Wikipedia are shaded blue, and for HTB in red.

conversion between the new and old HTB tokenization is deterministic in both directions, but the new tokenization style is both simpler and matches practices in related languages, most notably Arabic.

Dependencies Some custom relation sub-types, which were predictable from the words they connect, have been removed in the new scheme, such as case:acc and case:gen for object and possessive markers, or mark:q for question markers.

# 3.3 Validation

The standard UD toolkit contains a validation framework, but at the language-specific level it is quite limited, largely checking feature-POS combination sanity and permitted relations or auxiliaries. Ideally, each enforceable provision in language-specific guidelines should be captured in machine-readable format, a vision we attempt to implement with an extension of Grew, the "graph rewriting for NLP" search and transformation engine for decorated graphs, originally designed for corpus exploration (Guillaume, 2021). Grew allows rigorous definition of graph patterns, including quantification and negation, which enables e.g. searching for verbs with no subject (Figure 1). We created a tool around Grew, dubbed grewv (for "grew validator"), which uses Grew expressions to describe non-conformant trees and generates an error report in human- and machine-readable formats.

As we encountered new phenomena in the data, annotators worked together to define guidelines, but also rules, consisting of a graph matching definition, a short name, a long-form message template, and an error level: 'error' or 'warning'. Rule templates support placeholders for tree-specific information, such as token numbers.
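A stand-in illustration of how such rules can drive validation: each rule pairs a matching condition with a severity level and a message template whose placeholders are filled per match. The sketch below uses plain Python predicates, not Grew's actual pattern language, and the rule shown (a token with a cc child must itself be attached as conj, parataxis or root) is a simplified stand-in for the real ruleset:

```python
def validate(tree, rules):
    """Run each rule over a dependency tree (a list of token dicts) and
    render its message template for every matching token."""
    report = []
    for rule in rules:
        for tok in tree:
            if rule["match"](tok, tree):
                report.append((rule["level"], rule["message"].format(id=tok["id"])))
    return report

# Simplified coordination rule: any token with a cc child must be
# attached as conj, parataxis or root.
cc_rule = {
    "level": "error",
    "message": "Token {id} has a cc child, but is neither conj, parataxis nor root.",
    "match": lambda tok, tree: (
        any(t["head"] == tok["id"] and t["deprel"] == "cc" for t in tree)
        and tok["deprel"] not in ("conj", "parataxis", "root")),
}

tree = [
    {"id": 1, "head": 0, "deprel": "root"},
    {"id": 2, "head": 3, "deprel": "cc"},
    {"id": 3, "head": 1, "deprel": "obj"},  # has a cc child, wrong deprel
]
print(validate(tree, [cc_rule]))
```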
While errors result in validation failure, warnings can be dismissed, indicating that an annotator has reviewed and approved the data. An error-level message and pattern are shown in (6), which prohibits tokens from bearing the cc label for coordinating conjunctions if they are not the child of a coordinated token, with one Hebrew lexical exception for the expression 'both ... and'.

(6) error: Token {matching[nodes][X]} has a cc child (token {matching[nodes][Y]}), but is neither conj, parataxis nor root.

pattern: { Y[lemma <> "…"]; X -[cc]-> Y; } without { * -[conj|root|parataxis]-> X; }

We integrated grewv into all stages of the project: continuous validation is used on the ruleset and at annotation time, and the treebank is passively scanned for trees requiring correction on updates. Changes to the ruleset are submitted, discussed, and reviewed over a mailing list, and proposed changes trigger an analysis job on our continuous integration system. Each change is checked for syntax errors and tested against illustrative examples of trees which are supposed to pass and fail. Finally, a random subsample of outputs is inspected by an annotator to confirm that the rule works as expected.

Confirmed changes are automatically deployed to our Arborator-Grew annotation instances; we have made modifications to track and report validation errors and warnings, and annotations cannot be marked 'final' until they have passed validation. As rules evolved, the treebank is passively scanned for 'stale' trees: annotations which passed validation at the time they were made, but now require correction in order to pass the latest ruleset. Periodically, batches of stale trees are prepared for freshening,

![](images/471dc14432e352fd2e275d20d11147a0ce85c114ac5614ed6a093c03b88f6f16.jpg)
Figure 1: A Grew search in HTB for subjectless verbs and a result.
to ensure the treebank asymptotically approaches a consistent annotation level.

# 3.4 Inter-annotator agreement (IAA)

Table 3 gives aligned token accuracy using the official CoNLL scorer (Words) and Cohen's $\kappa$ for each annotation layer for $\sim 440$ doubly annotated sentences ($\sim 12.6$K tokens) before adjudication and grewv validation. As this is the first Hebrew corpus annotated natively in UD, these are, to the best of our knowledge, the first reported IAA scores on full UD annotation for the language. Due to the nature of $\kappa$, features beyond segmentation are computable only for identically tokenized words; however, given the high segmentation score, this is negligible.
| Words | Lemma | UPOS | FEATS | Head | Deprel | Misc |
|---|---|---|---|---|---|---|
| 99.1% | 98.5 | 94.8 | 90.9 | 95.8 | 95.7 | 95.6 |
Table 3: Word segmentation accuracy, and Cohen's kappa for each annotation layer. Misc stands for miscellaneous optional UD annotations, including e.g. CorrectForm for typos.

The metrics indicate a very high level of agreement, with the lowest scores on FEATS, where agreement holds only if all features agree (Gender, Number, Tense etc.); for $\kappa$-scores on individual morphological features, see the Appendix.

# 4 In-domain and cross-corpus NLP

In this section we present a series of experiments on cross-corpus segmentation, tagging, lemmatization and parsing, which is made possible for the first time by the release of non-newswire treebank data. Several systems exist for UD parsing of Hebrew: in this section we compare two popular off-the-shelf, end-to-end neural UD systems, Stanza (Qi et al., 2020) and Trankit (Nguyen et al., 2021), the latter of which is the SOTA for Hebrew parsing; and the current SOTA system for non-concatenative Hebrew segmentation and tagging (Seker et al., 2022), which is the only one to use a pre-trained Hebrew transformer, AlephBERT, introduced in the same paper, but which, based on the paper, does not cover UD parsing. We also release and describe our own system for these tasks, called HebPipe below, a pipeline system building on the best existing systems with some incremental improvements.

Segmentation All previous work on UD Hebrew parsing has been limited by the use of non-concatenative tokenization schemes. However, with the conversion to the concatenative tokenization described in Section 3.2, we are able to take advantage of high-accuracy characterwise segmentation approaches using binary classification (each character either begins a new segment or not).
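The binary framing reduces segmentation to one decision per character: does a new segment begin here? A minimal sketch with a hand-written stand-in for the trained classifier (the toy prefixes are transliterated; in practice a learned model such as RFTokenizer's classifier supplies the per-character predictions):

```python
def segment(word, starts_segment):
    """Split a word wherever the classifier predicts that a character
    begins a new morphological segment (position 0 always does)."""
    segments, current = [], ""
    for i, ch in enumerate(word):
        if i > 0 and starts_segment(word, i):
            segments.append(current)
            current = ""
        current += ch
    segments.append(current)
    return segments

def toy_classifier(word, i):
    # Stand-in rule: split right after a known one-letter-word prefix
    # (transliterated 'ha' = 'the', 've' = 'and'); a trained model
    # replaces this in a real system.
    return word[:i] in ("ha", "ve")

print(segment("habait", toy_classifier))  # ['ha', 'bait']
print(segment("bait", toy_classifier))    # ['bait']
```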
The previous characterwise SOTA system, RFTokenizer (Zeldes, 2018), is based on lexical features, such as looking up possible POS tags of MWT substrings around a segmentation point, and an XGBoost classifier, and has been applied to other morphologically rich languages, such as Arabic and Coptic (Zeldes and Schroeder, 2016); however, it does not utilize pretrained transformers, which do not generally encode character-level information (AlephBERT is word-piece based). Seker et al. (2022, 51) also mention that it is counterintuitive to use a word-level model as input to character-level tasks, but note that despite this their approach still achieves a high segmentation score.

To improve the SOTA in segmentation, we combine the characterwise RFTokenizer classifier with the transformer from Seker et al. (2022) operating at the MWT word level. We train an LSTM using AlephBERT to predict whether each word contains prefixes requiring segmentation, suffixes, or both, thereby targeting an MWT-level property. We then feed these predictions to RFTokenizer as a feature, an approach not previously applied to the problem to the best of our knowledge. To avoid overfitting, we reserve half of the training data and the development set for the AlephBERT LSTM training, and the remaining half for the XGBoost classifier, which is fed the features described in Zeldes (2018) as well as the AlephBERT predictions (code attached to this paper).

Tagging For POS tagging and features we employ a simple LSTM sequence tagger using the same pre-trained AlephBERT embeddings. We observe in development data that some tokens are ambiguous regarding part of speech depending on whether they are prefixes or suffixes: for example, the letter vav $\langle \mathrm{w} \rangle$ can stand for the word 'and', pronounced [ve] and tagged CCONJ, at the beginning of an MWT, or the word 'him/his', a pronoun pronounced [o] at the end of an MWT.
If tokens are tagged as a naive sequence, these are occasionally mistagged in ambiguous contexts, for example:

(7)

If the first two tokens are spelled together, then the example is a predicative verbless sentence ('his house is a palace'), and the letter 'w' is a pronoun (PRON). However, if the last two tokens are spelled together, then 'w' is a coordinating conjunction (CCONJ), meaning 'and', and the translation would be 'a house and a palace'. Other errors can occur similarly, for example between an initial article ha- 'the' spelled the same as the suffix feminine possessive -a 'her'.

To make the model aware of such cases, we concatenate a dense embedding using BIES notation to each token vector, indicating whether it begins an MWT (B), ends one (E), is inside one (I) or represents a single-token word (S). Using concatenative tokenization, we observe a degradation of $0.32\%$ in end-to-end tagging accuracy on HTB when ablating this feature, which amounts to an error reduction of $\sim 12\%$ from the feature. Although this improvement is minor, it is not computationally complex, and we are hopeful that it could be a useful approach for other MRLs, such as Arabic. Tagger predictions are also fed into Stanza's lemmatizer to produce our lemmatization.

Parsing For dependency parsing we rely on the prevalent biaffine approach proposed by Dozat and Manning (2017), using the implementation in DiaParser (Attardi et al., 2021), which adds intermediate transformer-layer inputs to the classifier. We again rely on AlephBERT to represent input embeddings.

We use the default hyperparameters for all components described above and do not perform hyperparameter optimization (see appendix for details).

In-domain results Table 4 compares performance for the systems listed above.
Results are computed using the official CoNLL scorer from the 2018 shared task on UD parsing, and with the exception of using gold sentence splits to match prior work on Hebrew, all numbers reflect realistic, end-to-end performance from plain-text input, using the MWT F1 score for morphological segmentation and AlignAcc for all other metrics. All numbers have been reproduced by retraining each system, except for Seker et al.'s system, for which trainable code is not provided. For HTB we report results using the concatenative tokenization, as well as with the older non-concatenative tokenization (in brackets) for comparability with previous work, with the official train-dev-test splits maintained. The splits for IAHLTwiki were established by stratified sampling evenly across the domains described in Section 3.1.

As the table shows, our approach achieves new SOTA scores for segmentation (MWT F1), lemmatization and parsing (UAS and LAS), as well as for the morphologically and lexically informed combined UD scores (CLAS, MLAS, BLEX), on HTB. For tagging, the result is very close - given that both Trankit and our system use a torch-based, transformer-driven sequence tagger, it appears that either Trankit's underlying XLM-RoBERTa model (Conneau et al., 2020) is superior to AlephBERT here, or we are looking at minor, random-chance differences. Overall, the most notable advances in score are for segmentation - likely due to stacking RFTokenizer and AlephBERT, since the next best score is Seker et al.'s pure AlephBERT system - and for parsing, likely due to the use of a language-specific BERT, which has not been reported on in previous papers for Hebrew. Here the difference to Trankit's XLM-RoBERTa result is very noticeable (+6.6 points); however, we note that this folds in
| | HTB→HTB: Stanza | Trankit | Seker et al. | This paper | Wiki→Wiki: Stanza | Trankit | This paper |
|---|---|---|---|---|---|---|---|
| MWT F1 | 93.97 (92.82) | 97.27 (96.04) | - (98.20)* | 98.81 (98.37) | 92.87 | 94.64 | 98.78 |
| POS | 96.99 (97.12) | 97.46 (97.63) | - (96.20)* | 97.34 (97.40) | 95.76 | 96.98 | 97.27 |
| FEATS | 95.45 (95.65) | 85.95 (85.65) | - (93.05)* | 91.68 (92.52) | 89.23 | 90.69 | 91.06 |
| AllTags | 94.64 (94.85) | 85.20 (84.90) | - | 91.06 (91.92) | 88.24 | 89.58 | 90.30 |
| Lemma | 96.63 (96.89) | 96.69 (97.06) | - | 97.52 (97.58) | 97.51 | 94.70 | 97.49 |
| UAS | 85.62 (85.78) | 85.46 (85.60) | - | 91.90 (92.07) | 82.96 | 89.81 | 92.19 |
| LAS | 82.67 (82.88) | 82.82 (83.01) | - | 89.42 (89.65) | 80.22 | 87.65 | 90.01 |
| CLAS | 76.01 (75.93) | 83.96 (83.88) | - | 84.48 (84.48) | 73.16 | 83.96 | 86.16 |
| MLAS | 70.63 (68.97) | 63.00 (60.20) | - | 72.24 (72.24) | 59.64 | 69.35 | 72.37 |
| BLEX | 72.71 (72.65) | 79.80 (79.87) | - | 80.99 (80.99) | 70.66 | 76.03 | 82.56 |
Table 4: In-domain UD NLP performance for both datasets (train on HTB → test on HTB; train on Wiki → test on Wiki). Figures in brackets are for the older, non-concatenative tokenization. All numbers are end-to-end from plain text. A \* indicates numbers from the cited paper; other numbers are reproduced by the authors.

gains from the superior segmentation quality, since all scores are end-to-end from plain-text sentences.

For IAHLTwiki the results favor our approach even more, with the sole exception of lemmatization. However, here our approach in fact uses Stanza itself, meaning the slightly lower score is probably due to random initialization differences. Although the absolute numbers for IAHLTwiki are sometimes lower than HTB's scores, we note that the Wiki data may not only be more challenging, but the use of the simplified tokenization shifts task difficulties: based on HTB, segmentation accuracy is higher across the board when using the simpler tokenization; however, due to the need to represent orthographically unexpressed articles using morphological features and the lack of easy-to-tag inserted pseudo-tokens, metrics involving morphology may become lower.

Cross-domain results In order to assess how reliably our results generalize to unseen data from a different source, we test the trained models from Table 4 on data from the 'other' corpus, using the same partitions (training still relies on each source corpus's 'train' partition, and evaluation is on the same 'test' partition used in the previous table). For this cross-corpus evaluation, we report scores only on the new tokenization, since no previous scores are available for comparison in this setting, and systems are unable to learn non-concatenative MWT expansion from the new Wiki dataset. Table 5 shows the results for Stanza, Trankit and our system, HebPipe.

As the table shows, HebPipe achieves the best cross-corpus generalization, primarily due to large gains in MWT F1, which reduce cascading errors in downstream tasks. Segmentation degradation
Segmentation degradation + +is only around $0.25\%$ , meaning POS tagging and lemmatization are mainly vulnerable to OOV items, which are mitigated by AlephBERT's lexical coverage. Stanza and Trankit both suffer only a little over $1 - 2\%$ tagging degradation, but for parsing metrics degradation is much more substantial (around 5- $10\%$ LAS on HTB). + +Across the board, degradation is more substantial when predicting on HTB and especially on dependency metrics, which suggests that HTB contains more constructions not represented in the Wiki data. However, our error analysis suggests that a substantial portion of parsing degradation is owing to errors in the original HTB's conversion into UD (see Section 5 below). Overall, these results indicate that OOD performance on a second genre of formal, written Hebrew is fairly reliable for segmentation and tagging, but less so for parsing, and that the segmentation approach taken in this paper is particularly robust, possibly due to XGBoost's well known resistance to overfitting. We stress however that these results do not yet indicate performance quality on informal text types. + +Joint training In response to reviewer feedback in the rebuttal period, we were able to train joint models for each component, using both the HTB and IAHLTwiKI train set for training and the joint dev sets for early stopping via simple concatenation. Table 6 gives the scores for the jointly trained model. While we do not necessarily recommend using a joint model due to possible inconsistencies between the datasets which have not yet been resolved (see Section 5 below), the numbers indicate that the model is able to robustly deal with both test sets, with neither substantial gains nor degradation over in-domain training numbers. Importantly, the + +
| | Wiki→HTB: Stanza | Trankit | This paper | Δ | HTB→Wiki: Stanza | Trankit | This paper | Δ |
|---|---|---|---|---|---|---|---|---|
| MWT F1 | 91.79 | 92.24 | 98.59 | -0.22 | 91.79 | 93.03 | 98.51 | -0.27 |
| POS | 94.09 | 94.27 | 95.25 | -2.09 | 94.09 | 95.02 | 96.41 | -0.86 |
| FEATS | 83.35 | 81.19 | 88.02 | -3.66 | 83.35 | 76.50 | 90.06 | -1.00 |
| AllTags | 82.12 | 79.72 | 86.97 | -4.09 | 82.12 | 75.10 | 88.94 | -1.36 |
| Lemma | 95.05 | 90.96 | 97.31 | -0.21 | 95.05 | 96.41 | 97.75 | 0.26 |
| UAS | 76.50 | 82.55 | 86.05 | -5.85 | 76.50 | 82.32 | 90.37 | -1.82 |
| LAS | 72.26 | 78.22 | 82.14 | -7.28 | 72.26 | 78.34 | 87.35 | -2.66 |
| CLAS | 63.65 | 72.62 | 76.29 | -8.19 | 63.65 | 77.60 | 83.02 | -3.14 |
| MLAS | 45.19 | 47.25 | 60.22 | -12.02 | 45.19 | 45.10 | 67.90 | -4.47 |
| BLEX | 58.63 | 60.88 | 72.86 | -8.13 | 58.63 | 72.88 | 79.73 | -2.83 |
model clearly outperforms the cross-domain results, suggesting that it may indeed be overall the most robust choice for totally unseen data. We expect that further harmonization of the two datasets will increase the usefulness of joint training.

Table 5: Out-of-domain UD NLP performance, training on one corpus and testing on the other; all numbers are end-to-end from plain text. $\Delta$ gives the difference between HebPipe's in-domain and out-of-domain performance.
| | HTB | Wiki | | HTB | Wiki |
|---|---|---|---|---|---|
| MWT F1 | 98.78 | 98.65 | UAS | 91.11 | 92.65 |
| POS | 97.14 | 97.29 | LAS | 88.29 | 90.36 |
| FEATS | 91.02 | 90.93 | CLAS | 83.27 | 86.65 |
| AllTags | 90.39 | 90.11 | MLAS | 70.57 | 72.37 |
| Lemma | 97.65 | 98.15 | BLEX | 80.13 | 84.03 |
Table 6: UD NLP performance for a joint model, trained on both corpora.

# 5 Error analysis

The discrepancy in NLP quality across corpora for different tasks, and especially the lower OOD scores below the segmentation level, leads us to suspect that while segmentation in the two corpora is largely mutually consistent (after conversion of the tokenization scheme), other annotation layers may have some issues. Such issues could also explain why joint training does not outperform in-domain scores: improvements from having more training data could be cancelled out by inconsistencies. In order to better understand the most common errors, Table 7 gives the three most commonly confused label pairs for POS tags and dependency relations in both cross-corpus directions (for complete confusion matrices, see the Appendix).

As the table shows, a substantial portion of errors is due to NOUN vs. PROPN confusions, which is not unusual in general (see Behzad and Zeldes 2020), but suspicious given that the directions are flipped: testing on HTB we see over-prediction
| | gold (train Wiki → test HTB) | pred | freq | gold (train HTB → test Wiki) | pred | freq |
|---|---|---|---|---|---|---|
| POS | NOUN | PROPN | 110 | PROPN | NOUN | 160 |
| POS | VERB | ADJ | 86 | VERB | AUX | 23 |
| POS | NOUN | PROPN | 73 | ADJ | VERB | 21 |
| deprel | nmod | obl | 77 | obl | nmod | 73 |
| deprel | compound | nmod | 70 | nmod | obl | 72 |
| deprel | obl | nmod | 60 | compound | flat | 30 |
Table 7: Top 3 confused tag and deprel pairs by corpus.

of proper nouns, while testing on Wikipedia data shows the opposite trend. Manual inspection of the errors reveals that HTB's name annotations are inconsistent: names in the original UD HTB data, and especially place names, are tagged NOUN if they are transparently composed of multiple Hebrew nouns. Thus places like Tel Aviv (lit. 'Spring Mound') or Kfar Saba (lit. 'Grandfather Village') are not tagged as PROPN in HTB, but are PROPN in IAHLTwiki, leading to prediction errors in both directions. ADJ vs. VERB confusions are also 'flipped', though less common, and are owing to a number of lexical items with different treatments, such as dome 'similar' and shone 'different', which are tagged as adjectives in the Wiki data, but as participles in HTB, based on their etymology.

For dependency relations, most confusions are due to obl vs. nmod, which are annotated consistently in the data and correspond to verbal and nominal prepositional modifiers, i.e. the well-known challenge of PP attachment (see Kawahara and Kurohashi 2005). Manual inspection confirms that these are less worrying in terms of corpus compatibility. Confusions involving the compound label are mainly due to errors in HTB, which include common items such as brit ha-mo'atsot 'Soviet Union' (occasionally labeled nmod instead of compound), and often carry an additional annotation ConvUncertainLabel in the original data, indicating that this is an issue with the original conversion of the data to UD. We plan to address these and other inconsistencies in future releases of the data.

# 6 Conclusion

In this paper we make available, describe and evaluate annotation quality in the first non-newswire treebank for Hebrew, complementing the existing newswire data, which is by now over 30 years old.
We propose and implement a new, simpler tokenization scheme for Hebrew, and release a revised version of the existing UD Hebrew newswire corpus which follows the new scheme. Our evaluation of NLP systems on the new tokenization indicates that it is somewhat easier for a range of tokenizers, at the cost of slightly lower scores for downstream morphosyntactic annotations. We also evaluate agreement on the complete UD annotation pipeline for Hebrew on over 12K doubly annotated tokens, and build a novel infrastructure for error detection based on Grew.

As part of our evaluation of NLP quality on the new resources, we compared several off-the-shelf systems, whose approaches we combine and incrementally improve to produce a new system, HebPipe, which is released with this paper. Our system achieves new SOTA results on most UD NLP tasks for the older HTB dataset, including segmentation, lemmatization and dependency parsing, evaluated on both the new and older tokenization schemes. We also reported the first cross-corpus parsing experiment results, which indicated that segmentation and tagging using our system generalize well, and substantially better than with previous systems, while full dependency parsing is still less reliable out of domain. Error analysis indicates that some of the issues are caused by remaining compatibility problems across the corpora, which we plan to address in the future.

In other future work, we are in the process of annotating data from further sources, including written language from blogs, social media and government websites, as well as spoken data from parliament proceedings and TV shows. To improve annotation, we hope to extend grewv to annotator-mediated autocorrection of validation errors, including choosing between multiple correction options, as well as machine-guided refinement of annotation guidelines, and guiding source selection for
collecting new material.

URLs for IAHLTwiki, the revised UD HTB and complete code for the NLP tools from Section 4 will be included after review of this paper.

# Limitations

While the resources created for this paper substantially broaden the genres, domains and temporal diversity of data for Hebrew segmentation, tagging and parsing, they are still very limited. In particular, the newly released data reflects relatively formal and well-edited language, and does not cover spoken language or informal user-generated content from the web, areas which we would like to explore in future work. Statements about generalizability and OOD reliability of tools should therefore be interpreted cautiously. Our results are only truly reliable for the specific case of the Hebrew datasets examined here, and may not apply to other domains or similar morphologically rich languages (MRLs), such as Arabic, with which we are also experimenting.

The NLP tools compared in this paper also constitute a narrow and subjective selection: the choice of Stanza and Trankit was motivated by their popularity and the existence of previous models for Hebrew, while the comparison with Seker et al.'s work was due to that paper's previous SOTA scores. It is possible that other architectures could outperform the results reported here, as well as reach different conclusions about generalizability and error sources. Due to the large number of pipeline components in the systems compared here, each with separate trained models for each corpus (18 models per experiment across 3 systems), and the focus of this paper on the corpus resource, we decided to use single seeds and not to obtain average scores from a large number of runs, which would require spending substantially more computing resources and promote current carbon-intensive trends in NLP.
While we understand the desire for such numbers in papers focused on novel parsing architectures with very close numbers, we feel that they would not change the results presented here (though reviewers can let us know otherwise).

For similar reasons, we performed no hyperparameter optimization on any of the systems, meaning that it is also possible that systems would fare differently if this were attempted, possibly making some of our incremental improvements, such as within-MWT BIES positional embeddings, unhelpful. That said, we are skeptical that extensive optimization against single-genre dev sets is meaningfully useful for performance on unseen data in the wild, and we prefer to invest in developing systems which can be trained and run with moderate or no GPU resources (see also the Ethics Statement below). Optimizing pipeline architectures thoroughly would mean considering interactions between different components' models, for gains that would quite possibly be limited to the data used in the paper itself.

Finally, the evaluation of our corpus focused on end-to-end numbers, rather than evaluating each task on gold inputs (i.e. parsing numbers reflect upstream tokenization errors, etc.), but did use gold sentence segmentation, to match the setup in previous SOTA work such as Seker et al. (2022). Especially given the limited space in this paper, we aimed to follow the UD shared-task paradigm in preferring end-to-end scores, which more closely mirror expected performance in the wild. Although this complicates the interpretation of better and worse downstream component choices, we feel this is inevitable, since upstream outputs such as segmentation and tagging are also part of the input for downstream tasks, and a large number of evaluation scenarios is conceivable.
By contrast, our inter-annotator agreement study required identical tokenization to compute kappa for annotations, meaning the lack of agreement numbers including tokenization disagreements is a further limitation of our results. This last issue likely has low impact on numbers due to the high agreement on tokenization, and we intend to release the doubly annotated raw data for interested researchers as well. + +# Ethics Statement + +This work contributes to open source and open access progress in NLP for morphologically rich languages, and specifically for Hebrew, which has not enjoyed the same wealth of resources as English and other European languages. We recognize that NLP research has a computing cost and carbon footprint, which motivates us to release all models in this work (preventing the need to retrain similar models), to use base-sized language models, and to avoid extensive hyperparameter optimization on these single-genre datasets, which may lead to minor improvements on test sets, but may or may not generalize to applications in the wild. + +We also recognize that NLP tools can be used to + +do harm, but expect that the type of NLP processing promoted here will do more good than harm by preventing tools from adhering closely to outdated and narrow-domain data, which this work aims to broaden. Given that systems for UD-style outputs for Hebrew already exist, we view any reduction in topical and authorial bias, as well as the public release of more resources, as net positives. All participants in this work have been compensated. The annotators (3 female, 3 male) were employed as regular employees of IAHLT, the non-profit organization which funded the treebank. Previous work has been credited to the best of our knowledge. + +# Acknowledgements + +We are grateful for the kind support of the Israel Innovation Authority and the Israel National Digital Agency. 
We would also like to acknowledge the immense help of the Israeli Association of Human Language Technologies (iahtt.org), its members and employees for supporting this work, as well as all of the annotators who participated in this project, without whose hard work this resource would not have been possible. + +# References + +Meni Adler and Michael Elhadad. 2006. An unsupervised morpheme-based HMM for Hebrew morphological disambiguation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 665-672, Sydney. +Giuseppe Attardi, Daniele Sartiano, and Maria Simi. 2021. Biaffine dependency and semantic graph parsing for Enhanced Universal dependencies. In Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021), pages 184-188, Online. +Shabnam Behzad and Amir Zeldes. 2020. A cross-genre ensemble approach to robust Reddit part of speech tagging. In Proceedings of the 12th Web as Corpus Workshop (WAC-XII), pages 50-56, Marseille, France. +Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle-moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics. + +Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of ICLR 2017, Toulon, France. +Yoav Goldberg and Michael Elhadad. 2009. Hebrew dependency parsing: Initial results. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT'09), pages 129-133, Paris, France. Association for Computational Linguistics. +Gael Guibon, Marine Courtin, Kim Gerdes, and Bruno Guillaume. 2020. 
When collaborative treebank curation meets graph grammars. In Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020), pages 5291-5300, Marseille, France.
Bruno Guillaume. 2021. Graph matching and graph rewriting: GREW tools for corpus exploration, maintenance and conversion. In Proceedings of EACL 2021 System Demos, pages 168-175, online.
Daisuke Kawahara and Sadao Kurohashi. 2005. PP-attachment disambiguation boosted by a gigantic volume of unambiguous examples. In Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP-05), pages 188-198, Berlin and Heidelberg. Springer.
Max Müller-Eberstein, Rob van der Goot, and Barbara Plank. 2021. How universal is genre in Universal Dependencies? In Proceedings of Treebanks and Linguistic Theories 2021, pages 69-85, Sofia, Bulgaria.
Minh Van Nguyen, Viet Lai, Amir Pouran Ben Veyseh, and Thien Huu Nguyen. 2021. Trankit: A lightweight transformer-based toolkit for multilingual natural language processing. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 80-90.
Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures. In Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019, Cardiff, 22nd July 2019, pages 9-16, Mannheim. Leibniz-Institut für Deutsche Sprache.
Siyao Peng and Amir Zeldes. 2018. All roads lead to UD: Converting Stanford and Penn parses to English Universal Dependencies with multilayer annotations. In Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), pages 167-177, Santa Fe, NM.
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages.
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101-108.

Shoval Sade, Amit Seker, and Reut Tsarfaty. 2018. The Hebrew Universal Dependency treebank: Past, present and future. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 133-143, Brussels, Belgium.
Djamé Seddah, Sandra Kübler, and Reut Tsarfaty. 2014. Introducing the SPMRL 2014 shared task on parsing morphologically-rich languages. In First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages, pages 103-109, Dublin, Ireland.
Amit Seker, Elron Bandel, Dan Bareket, Idan Brusilovsky, Refael Greenfeld, and Reut Tsarfaty. 2022. AlephBERT: Language model pre-training and evaluation from sub-word to sentence level. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 46-56, Dublin, Ireland.
Deven Santosh Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language processing models: A conceptual framework and overview. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5248-5264, Online. Association for Computational Linguistics.
Khalil Sima'an, Alon Itai, Yoad Winter, Alon Altman, and Noa Nativ. 2001. Building a tree-bank of Modern Hebrew text. *Traitement Automatique des Langues*, 42:347-380.
Reut Tsarfaty. 2013. A unified morpho-syntactic scheme of Stanford dependencies. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 578-584, Sofia, Bulgaria. Association for Computational Linguistics.
Amir Zeldes. 2018. A characterwise windowed approach to Hebrew morphological segmentation. In Proceedings of the 15th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 101-110, Brussels, Belgium.
Amir Zeldes and Caroline T. Schroeder. 2016. An NLP pipeline for Coptic. In Proceedings of the 10th ACL SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH 2016), pages 146-155, Berlin.
Amir Zeldes and Dan Simonson. 2016. Different flavors of GUM: Evaluating genre and sentence type effects on multilayer corpus annotation quality. In Proceedings of the 10th Linguistic Annotation Workshop (LAW X), pages 68-78, Berlin.

# A Agreement on morphological features

Table 8 details kappa scores for each morphological feature category exhibiting more than 10 disagreements, including both universal UD features and Hebrew-specific ones, such as HebBinyan (the morphological inflectional class of Hebrew verbs). The IAA study was performed before the introduction of the grewv validation system, and running the system on the doubly annotated data reveals that in over $29\%$ of the disagreements (387 of 1,297 cases) at least one of the annotators' decisions would have been flagged as an error, based on the rules in Table 9. This result underscores the importance of automatic validation, and suggests that future IAA studies should report how many potential disagreements would have been prevented by such validation.
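The per-feature kappa scores reported in Table 8 follow the standard Cohen's kappa computation over the two annotators' labels for the same items. A minimal Python sketch (the labels below are illustrative toy data, not from the corpus):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' labels over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items where the annotators agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label marginals.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[l] * freq_b[l] for l in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: near-perfect agreement on a Gender-like feature.
a = ["Masc", "Fem", "Masc", "Masc", "Fem", "Fem"]
b = ["Masc", "Fem", "Masc", "Fem", "Fem", "Fem"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

In practice each feature category (Gender, Number, etc.) is scored separately over the tokens for which at least one annotator assigned that feature.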
| Label | Kappa | Disagreements |
| --- | --- | --- |
| Case | 0.902 | 82 |
| Definite | 0.966 | 123 |
| Foreign | 0.434 | 13 |
| Gender | 0.954 | 281 |
| HebBinyan | 0.960 | 56 |
| Number | 0.959 | 238 |
| NumType | 0.688 | 48 |
| Person | 0.966 | 65 |
| Polarity | 0.709 | 48 |
| Prefix | 0.592 | 11 |
| PronType | 0.976 | 68 |
| Tense | 0.968 | 42 |
| Typo | 0.482 | 15 |
| VerbForm | 0.937 | 37 |
| VerbType | 0.703 | 31 |
| Voice | 0.900 | 139 |
| average | 0.818 | 81.06 |
| total | | 1,297 |
Table 8: Cohen's Kappa and the total number of disagreements for each feature and miscellaneous label.

# B Technical information and reproducibility

The AlephBERT transformer model by Seker et al. (2022), used for most of the components in this paper, is available from huggingface and was trained over approximately 8 days on a DGX machine (8 V100 GPUs) on close to 18 GB of text from the OSCAR corpus (Ortiz Suárez et al., 2019), Hebrew Wikipedia and Hebrew Twitter. The model uses the standard 12 layer transformer with 768 dimensional word-piece representations, or approximately 110 million parameters, with a vocabulary of 52K types.

The flair taggers described above were trained with a biLSTM CRF (hidden size 256) on top of AlephBERT transformer word embeddings, concatenated with 5 dimensional MWT positional dense embeddings for tagging, and 17 dimensional POS
| Rule name | Feature | Differences |
| --- | --- | --- |
| verb-mand-voice | Voice | 42 |
| pass-arg | Voice | 32 |
| verb-mand-binyan | HebBinyan | 28 |
| def-cons | Definite | 25 |
| binyan-nomid | Voice, HebBinyan | 22 |
| pron-prontype-required | PronType | 18 |
| aux-nopolarity | Polarity | 17 |
| amod-onlyadj | - | 15 |
| aux-binyan | HebBinyan | 13 |
| adp-case-gen | Case | 13 |
| yesh-polarity | Polarity | 12 |
| pron-nopolarity | Polarity | 12 |
| noun-adj-gender-agr | Gender | 11 |
| nmod-obl-case | Case | 11 |
| mandatory-gender | Gender | 11 |
| cc-child-conj | - | 11 |
| noun-adj-num-agr | Number | 10 |
| mandatory-number | Number | 10 |
| (other types) | - | 74 |
| total | | 1,297 |
Table 9: Frequency of validation errors in our inter-annotator agreement study identified by grewv, broken down by type for errors of frequency $\geq 10$. Annotations made after the introduction of our validation system would not permit these disagreements.

embeddings as inputs for morphological features. In all cases we used the default flair hyperparameters with a learning rate of 0.1, optimizing with SGD using a mini-batch size of 15 and halting on dev set accuracy with a patience of 3.

The Diaparser model, also using default hyperparameters, combines fixed AlephBERT embeddings (no fine-tuning) with randomly initialized, fine-tunable embeddings (100 dimensions), which are all fed into a 200-dimensional biLSTM topped by arc and dependency relation MLPs with biaffine attention (500 and 100 dimensional, respectively), for a complete parser model with approximately 12M trainable parameters. All training was carried out on a consumer laptop (Dell XPS 15, Intel Core i9-9980HK CPU @ 2.40GHz, 64 GB RAM, NVIDIA GeForce GTX 1650, 4 GB GPU RAM).

For Stanza and Trankit's architectures, as well as RFTokenizer and Seker et al.'s system, we refer readers to the original published papers and tool documentation.

# C Label distributions

In compliance with the EMNLP reproducibility checklist, Table 10 gives the exact breakdown of POS tag labels in each dataset.

# D Confusion Matrices

The confusion matrices in Figure 2 below show predicted vs. gold label frequencies for POS tagging
| | IAHLTwiki | HTB |
| --- | --- | --- |
| ADJ | 1,711 | 1,256 |
| ADP | 21,005 | 21,058 |
| ADV | 1,529 | 1,035 |
| AUX | 956 | 1,014 |
| CCONJ | 1,706 | 1,976 |
| DET | 11,177 | 11,587 |
| INTJ | 4 | 3 |
| NOUN | 31,624 | 31,003 |
| NUM | 1,126 | 1,220 |
| PRON | 1,633 | 1,015 |
| PROPN | 11,448 | 1,181 |
| PUNCT | 11,613 | 11,301 |
| SCONJ | 1,317 | 1,792 |
| SYM | 14 | 61 |
| VERB | 11,650 | 11,359 |
| X | 304 | 118 |
Table 10: POS label distributions.

and dependency relations in the cross-corpus experiments using this paper's best system, HebPipe. All numbers are computed using the official CoNLL UD scorer and end-to-end predictions from plain text, taking the scorer's optimal alignment as the basis for matching predictions and gold token labels, which is non-trivial due to occasional differences in the predicted segmentation of words into tokens.

![](images/5d0e7440b53dda7cdfe5544bf0fe3eef2a36d937b253d4c0fb6970c4ed80a661.jpg)
(a) train: Wiki $\rightarrow$ test: HTB

![](images/446aa2c3264339fe873418d35af7e8ecaf214abdc2df17ffa54461286e1b4bd3.jpg)

![](images/439005cfa474f9224dde869690dadecce599873e58b5d1cc0e6d7f5d5c17eca7.jpg)
(b) train: HTB $\rightarrow$ test: Wiki

![](images/dcc71e06195ddaea61d42533c1465475cb95581f4d89896694fa3bea3b33d8d0.jpg)
Figure 2: Confusion matrices for cross-corpus POS tag and dependency relation predictions in both directions. Top: training on HTB newswire and predicting on IAHLTwiki; bottom: training on IAHLTwiki and predicting on HTB.
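The confusion counts behind Figure 2 can be tallied from aligned gold/predicted label pairs once the scorer's (non-trivial) token alignment has been applied. A minimal sketch with toy labels, omitting that alignment step:

```python
from collections import Counter

def confusion_matrix(gold, pred):
    """Tally (gold, predicted) label pairs from aligned token sequences."""
    counts = Counter(zip(gold, pred))
    labels = sorted(set(gold) | set(pred))
    # Row = gold label, column = predicted label.
    return labels, [[counts[(g, p)] for p in labels] for g in labels]

gold = ["NOUN", "VERB", "ADP", "NOUN", "ADJ"]
pred = ["NOUN", "VERB", "ADP", "PROPN", "ADJ"]
labels, matrix = confusion_matrix(gold, pred)
```

Here the single off-diagonal cell records the one token whose gold NOUN was predicted as PROPN.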
# A Sequential Flow Control Framework for Multi-hop Knowledge Base Question Answering

Minghui Xie, Chuzhan Hao, and Peng Zhang*

College of Intelligence and Computing, Tianjin University

{minghuixie, chuzhanhao, pzhang}@tju.edu.cn

# Abstract

One of the key challenges of knowledge base question answering (KBQA) is multi-hop reasoning. Since different hops attend to different parts of the question, it is important to dynamically represent the question semantics for each hop.
Existing methods, however, (i) infer the dynamic question representation only through coarse-grained attention mechanisms, which may cause information loss, and (ii) do not effectively model the sequential logic that is crucial for the multi-hop reasoning process in KBQA. To address these issues, we propose a sequential reasoning self-attention mechanism to capture the crucial reasoning information of each single hop in a more fine-grained way. Based on the Gated Recurrent Unit (GRU), which is well suited to modeling sequential processes, we propose a simple but effective GRU-inspired Flow Control (GFC) framework to model the sequential logic of the whole multi-hop process. Extensive experiments on three popular benchmark datasets have demonstrated the superior effectiveness of our model. In particular, GFC achieves a new state-of-the-art Hits@1 of $76.8\%$ on WebQSP and is also effective when the KB is incomplete. Our code and data are available at https://github.com/Xie-Minghui/GFC.

# 1 Introduction

Knowledge base question answering (KBQA) aims to answer questions from structured knowledge bases. In real application scenarios of KBQA, reasoning with multiple hops over a knowledge graph (KG) is necessary for answering complex questions. Therefore, how to perform multi-hop reasoning effectively becomes a key challenge for the multi-hop KBQA task (Sun et al., 2018; Zhang et al., 2018; Ho et al., 2020; Shi et al., 2020; Han et al., 2020).

Existing methods for multi-hop KBQA have three main strands. The first is semantic parsing

![](images/2a7868700854fb70bf6a3e9e5ef813927ec5b7593d6f8b8724a138ebf08a8f40.jpg)

![](images/567e375598c118927e7ae3693eda80fa7ddfaccc8fd2bb7cb956e1c2325cbb94.jpg)
Figure 1: Relation attention weights on the reasoning paths of GFC and the strong path-based method TransferNet. The final entity scores are the weighted sum of two hops, which are positively correlated with the relation attention weights.
TransferNet tends to give $r_1$ a high score in the 2nd hop, thus obtaining the wrong answer (right). GFC can effectively weaken the attention on $r_1$ in the 2nd hop by introducing GRU-like sequential logic into the multi-hop process (left). People tend to pay more attention to current relations and less attention to past relations; thus GFC is more consistent with human reasoning habits.

based methods, which generate query graphs or statements by parsing questions (Yih et al., 2015; Luo et al., 2018; Lan and Jiang, 2020). The second is embedding-based methods, which score the embeddings of question objectives and candidate answers (Dong et al., 2015; Miller et al., 2016; Hao et al., 2017; Saxena et al., 2020). The third is path-based methods, which start from the topic entities of the question and walk on the KG to find answers. The third direction has its own advantages in terms of interpretability and extensibility (Sen et al., 2021). In recent years, more and more works have focused on path-based multi-hop reasoning methods (He et al., 2021; Sen et al., 2021; Shi et al., 2021).

However, existing methods still face some critical problems. First, path-based methods and some embedding-based methods usually leverage coarse-grained attention mechanisms to capture the reasoning information of each hop. For example, KVMemNN (Xu et al., 2019) adopts cross-attention between key-value memory and a sentence-level question representation. IRN (Zhou et al., 2018) uses the sentence-level question representation to eliminate relation embeddings of the previous hop. Some methods (He et al., 2021; Shi et al., 2021) adopt cross-attention between the sentence-level question representation and question tokens. However, compressing all the necessary information into the sentence-level representation may lose some crucial information. Although these methods have achieved good performance, there is still room for improvement.
Second, they do not effectively model sequential logic across the whole multi-hop process. Humans often reason sequentially, considering past and present information comprehensively, which is a kind of sequential logic. However, in existing methods the dynamic question representation of each hop is relatively independent (Cohen et al., 2020; Shi et al., 2021), and information flow is not effectively controlled across hops. For example, in Figure 1, models need to inhibit past relations to get the right answer. However, existing methods cannot do this well.

In response, we propose a novel model for multi-hop KBQA, dubbed GFC. First, we design a sequential reasoning self-attention mechanism to obtain more fine-grained reasoning information for each hop. Our update mechanism combines the self-attention mechanism with sequential logic in the reasoning scenario. It can capture more nuanced reasoning information to distinguish similar relations on the KG. Second, we design a simple but effective GRU-inspired flow control framework to model the sequential logic of the whole multi-hop process more effectively. This framework controls the reasoning information flow among different hops, which enables GFC to consider past and present reasoning information comprehensively. Besides, it integrates the proposed update mechanism through our heuristic analogy to the GRU. Inspired by the gating mechanism of the GRU, we also introduce a self-gate unit to filter out redundant past reasoning information. As integral parts of the framework, these mechanisms further enhance the capability of the overall flow control framework. Our key contributions are as follows:

- We design a sequential reasoning self-attention mechanism to extract the crucial reasoning information of each single hop in a more fine-grained way.
- We propose a GRU-inspired flow control framework to model the sequential logic of the whole multi-hop process more effectively.
- Through controlling the reasoning flow among hops and our novel update mechanism, GFC is superior to most existing methods. Specifically, GFC achieves a new state-of-the-art Hits@1 of $76.8\%$ on WebQSP and is also highly effective when the KB is incomplete.

# 2 Related Work

In this paper, we mainly focus on neural network based methods.

# 2.1 Path-based Methods

These methods usually infer hop by hop over the knowledge graph. Thus they can produce reasoning chains that provide better interpretability.

Differentiable Knowledge Graph These methods use the sparse-matrix reified KB proposed by ReifKB (Cohen et al., 2020) to represent a symbolic knowledge base. The reasoning process is formulated as the multiplication of an entity vector and a relation matrix. E2EQA (Sen et al., 2021) handles multiple-entity questions by intersecting the answers of different topic entities. TransferNet (Shi et al., 2021) proposes an effective and transparent framework. These methods need no retraining for new entities and are easy to apply to large knowledge graphs because they encode entities as one-hot embeddings. However, they do not effectively model sequential logic across the whole multi-hop process.

Reinforcement Learning These methods view multi-hop reasoning as a multi-step decision making process handled with reinforcement learning. MINERVA (Das et al., 2018) and SRN (Qiu et al., 2020) define states as tuples of question and entities, and actions as traverse operations from the current entity on the knowledge graph. NSM (He et al., 2021) uses a teacher network to provide weak intermediate supervision signals of reasoning paths to a student network. Although these methods have strong interpretability, they usually suffer from convergence issues due to the huge search space and are harder to train compared to other approaches.

# 2.2 Embedding-based Methods

KVMemNN (Miller et al., 2016) reads key-value memory iteratively to conduct multi-hop reasoning.
EmbedKGQA (Saxena et al., 2020) utilizes KG embeddings to score questions and candidate answers. GraftNet (Sun et al., 2018) and PullNet (Sun et al., 2019) retrieve a question-specific subgraph and then use a graph convolutional network (Kipf and Welling, 2017) to implicitly infer answers. They enjoy high recall but suffer from many noisy entities. They have relatively weak interpretability because they usually cannot produce reasoning chains.

# 3 Methodology

The diagram of our proposed model GFC is shown in Figure 2. The task of KBQA is to find the answer entities for a natural language question $q$ with the help of a relation graph $\mathcal{G}$. The entities mentioned in a question are called topic entities. Starting from the topic entities, we derive the gold answer entities through multi-hop reasoning on the knowledge graph. Our proposed model GFC adopts the sparse-matrix reified KB proposed by ReifKB (Cohen et al., 2020) to represent the symbolic knowledge base. This representation enables our model to perform rapid calculations on large scale knowledge graphs and requires no retraining for new entities.

# 3.1 Sequential Reasoning Self-Attention Mechanism

To capture the crucial reasoning information in a more fine-grained way and alleviate the loss of crucial reasoning information at each hop, we combine the self-attention mechanism with the sequential logic of the multi-hop process. Specifically, we view the initial question representation as the query, and the question representation of the previous hop as the key and value. Given the initial question representation $H^{0}$ and the question representation $H^{t - 1}$ at hop $t - 1$ ($t \in [1, T]$), we first transform $H^{t - 1}$ and then compute the attention matrix $S$ with $H^{0}$. After that, we apply a row-wise softmax to $S$ to figure out which parts of $H^{t - 1}$ are more important in the current hop. The processed matrix is denoted $S_{q}$.
Then we apply the computed attention matrix $S_{q}$ to $H^{t - 1}$ to obtain the crucial reasoning information $\tilde{U}^{t}$. The detailed computing process is as follows:

$$
\boldsymbol {S} = \mathcal {F} ^ {k} \left(\boldsymbol {H} ^ {t - 1}\right) \times \boldsymbol {H} ^ {0} \tag {1}
$$

$$
\boldsymbol {S} _ {q} = \text {row-wise softmax} (\boldsymbol {S}) \tag {2}
$$

$$
\tilde {\boldsymbol {U}} ^ {t} = \boldsymbol {H} ^ {t - 1} \times \boldsymbol {S} _ {q} \tag {3}
$$

where $\mathcal{F}^k$ denotes a linear fully connected layer, $\{H^0,H^{t - 1},\tilde{U}^t\} \in \mathbb{R}^{L\times d}$ and $\{S,S_q\} \in \mathbb{R}^{L\times L}$. $L$ is the length of the question and $d$ is the hidden size.

![](images/3e1705833a065f66d83189bc3c42a7656d9690dd4180bbffe674f13b923a0a02.jpg)
Figure 2: Overall architecture of our proposed GFC model.

At the first hop, $H^{t-1}$ equals $H^0$, and this process is a vanilla self-attention mechanism. As the reasoning process goes on, $H^{t-1}$ is no longer equal to $H^0$, which means we use the question representation of the previous hop together with the initial question representation to capture the crucial reasoning information through self-attention. Therefore, we call this process the sequential reasoning self-attention mechanism.

# 3.2 GRU-inspired Flow Control Framework

After capturing the crucial reasoning information of the current hop, controlling the reasoning information flow is crucial for effectively modeling the sequential logic of the whole multi-hop process. Motivated by the Gated Recurrent Unit (GRU) (Cho et al., 2014), we propose a simple but effective reasoning information flow control framework. Here is how we get inspired.
The main part of GRU is as follows:

$$
\tilde {h} ^ {t} = \tanh \left(\boldsymbol {W} _ {h} x ^ {t} + \boldsymbol {U} _ {h} \left(r ^ {t} \odot h ^ {t - 1}\right) + b _ {h}\right) \tag {4}
$$

$$
h ^ {t} = z ^ {t} \odot h ^ {t - 1} + (1 - z ^ {t}) \odot \tilde {h} ^ {t} \tag {5}
$$

where $x^t, h^{t-1}$ and $\tilde{h}^t$ are the input, the previous hidden state and the new hidden state, respectively. $r^t$ and $z^t$ are the reset gate and update gate, respectively. $W_h, U_h$ and $b_h$ are trainable parameters.

By analogy with the above formulas, we heuristically view the crucial reasoning information $\tilde{U}^t$ as the new hidden state $\tilde{h}^t$, because $\tilde{U}^t$ is also updated information like $\tilde{h}^t$. This means our sequential reasoning self-attention mechanism plays the same role as Eq. 4. Similar to Eq. 5, we synthesize the past and present reasoning information by introducing a gate mechanism. As pointed out in Cho et al. (2014), the update gate $z^t$ selects whether the hidden state is to be updated with a new hidden state $\tilde{h}^t$, while the reset gate $r^t$ decides whether the previous hidden state $h^{t - 1}$ is ignored. In our sequential reasoning self-attention mechanism, $\tilde{U}^t$ is the crucial reasoning information of the current hop. Therefore, we do not need an update gate $z^t$, but only a reset gate $r^t$ to decide how much past reasoning information is retained. We thus deduce the following equation:

$$
\boldsymbol {U} ^ {t} = r ^ {t} \odot \boldsymbol {U} ^ {t - 1} + \tilde {\boldsymbol {U}} ^ {t} \tag {6}
$$

To achieve the effect of the reset gate $r^t$, we introduce the self-gate unit (SGU). Figure 3 illustrates the architecture of the SGU.

![](images/c3a289238479882b03ad08934d5c73391ed0da484284d8aef1bcf1ed167f635a.jpg)
Figure 3: Self-Gate Unit (SGU).

The SGU computes an internal attention distribution that eliminates irrelevant parts of the previous reasoning information $U^{t - 1}$.
The detailed process is as follows:

$$
\mathbf {S G U} \left(\boldsymbol {U} ^ {t - 1}\right) = \boldsymbol {T} \left(\boldsymbol {U} ^ {t - 1}\right) \odot \boldsymbol {U} ^ {t - 1} \tag {7}
$$

$$
T \left(\boldsymbol {U} ^ {t - 1}\right) = \sigma \left(\boldsymbol {U} ^ {t - 1} \boldsymbol {W} _ {1} + \boldsymbol {b} _ {1}\right) \tag {8}
$$

where $T(\cdot)$ indicates the transform gate, and $\sigma (\cdot)$ is the element-wise sigmoid function that confines the point-wise weights to a fixed range. $\odot$ denotes the Hadamard product. $W_{1}\in \mathbb{R}^{d\times d}$ and $b_{1}\in \mathbb{R}^{d}$ are trainable parameters. Thus the final reasoning information $U^t$ of the current hop is calculated as follows:

$$
\boldsymbol {U} ^ {t} = \mathbf {S G U} \left(\boldsymbol {U} ^ {t - 1}\right) + \tilde {\boldsymbol {U}} ^ {t} \tag {9}
$$

To alleviate large semantic deviation over the whole multi-hop process, we add the reasoning information $U^t$ to the initial question semantics. Finally, the dynamic question representation of each hop in our model is computed as follows:

$$
\boldsymbol {H} ^ {t} = \boldsymbol {H} ^ {0} + \mathbf {S G U} \left(\boldsymbol {U} ^ {t - 1}\right) + \tilde {\boldsymbol {U}} ^ {t} \tag {10}
$$

As shown in Figure 4, our proposed framework is similar to the architectures of GRU (Cho et al., 2014) and BERT (Devlin et al., 2019), and can be viewed approximately as a fusion of these two powerful NLP models. This inspires us to model the multi-hop reasoning process in the same way that we model language sequences.

![](images/1331839bb569ad22f03fea1103affd2714f64541ba73fd2f329e95f0f56daa51.jpg)
Figure 4: The schematic diagram of the GRU-inspired Flow Control Framework.

# 3.3 Fusion and Reasoning Module

After obtaining the fine-grained dynamic question representation, we use it to determine which relations to traverse on the knowledge graph in the current hop.
In detail, we sum the cross attention matrix $S_{q}$ column-wise and then apply a softmax to obtain the weight of each token. We then fuse the dynamic question representation $H^{t}$ with these weights to get the question vector $q^{t} \in \mathbb{R}^{d}$ for relation prediction. The computing process is as follows:

$$
\boldsymbol {q} ^ {t} = \boldsymbol {H} ^ {t} \times \operatorname {softmax} \left(\sum \boldsymbol {S} _ {q}\right) \tag {11}
$$

Then we use $q^t$ to make a multi-label classification over the relations of the knowledge graph, which enables our model to look up multiple paths in parallel:

$$
\boldsymbol {r} ^ {t} = \operatorname {sigmoid} \left(\mathcal {F} ^ {r} \left(\boldsymbol {q} ^ {t}\right)\right) \tag {12}
$$

where $\mathcal{F}^r$ is a linear fully connected layer. The follow operation then calculates the scores of all entities at hop $t$. The resulting entity vector $e^t$ of each hop is computed as:

$$
\mathbf {e} ^ {t} = \operatorname {follow} \left(\mathbf {e} ^ {t - 1}, \mathbf {r} ^ {t}\right) \tag {13}
$$

where $e^t \in [0,1]^n$ holds the scores of all entities at hop $t$, and $e^0$ is the initial score vector, in which only the topic entities are set to 1.

# 3.4 Output Layer

At the end of all $T$ hops, we calculate a multi-hop attention distribution $a \in \mathbb{R}^T$ to determine in which hop the answers are located. We argue that the question semantics of hop $t$ are wrong if the right answers can already be obtained within $t - 1$ hops. Thus we collect the dynamic question representations of all hops to calculate the multi-hop attention score:

$$
\boldsymbol {a} = \operatorname {softmax} \left(\mathcal {F} ^ {h} \left(\left[ q ^ {1}; \dots ; q ^ {T} \right]\right)\right) \tag {14}
$$

where $\mathcal{F}^h$ denotes a linear fully connected layer.
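As a concrete illustration, Eqs. 11, 12 and 14 can be sketched in numpy, with random stand-ins for the learned layers $\mathcal{F}^r$ and $\mathcal{F}^h$ (all shapes and seeds below are illustrative only, not the paper's configuration):

```python
import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
L, d, T, n_rel = 6, 8, 2, 5              # question length, hidden size, hops, relations

H_t = rng.normal(size=(L, d))            # dynamic question representation H^t
S_q = softmax(rng.normal(size=(L, L)))   # row-wise softmaxed attention matrix (Eq. 2)

# Eq. 11: column-wise sum of S_q, softmax, then fuse token representations.
token_weights = softmax(S_q.sum(axis=0))  # shape (L,), sums to 1
q_t = H_t.T @ token_weights               # H^t is (L, d), so transpose: result (d,)

# Eq. 12: multi-label relation scores via a linear layer (random stand-in for F^r).
W_r = rng.normal(size=(d, n_rel))
r_t = 1.0 / (1.0 + np.exp(-(q_t @ W_r)))  # element-wise sigmoid, shape (n_rel,)

# Eq. 14: attention over hops from concatenated question vectors [q^1; ...; q^T].
q_all = rng.normal(size=(T * d,))         # stand-in for the concatenation
W_h = rng.normal(size=(T * d, T))         # stand-in for F^h
a = softmax(q_all @ W_h)                  # shape (T,), sums to 1
```

Each relation score in `r_t` lies in $(0,1)$ independently (multi-label), while `a` is a proper distribution over hops.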
The final predicted answers $\hat{y}$ are computed as:

$$
\hat {\mathbf {y}} = \sum_ {t = 1} ^ {T} a ^ {t} e ^ {t} \tag {15}
$$

Given the golden answer set $y$, we take the $L_2$ (Euclidean) distance between $\hat{y}$ and $y$ as our training objective:

$$
\mathcal {L} = | | \hat {y} - y | | \tag {16}
$$

Since the framework is fully differentiable, we can learn all the intermediate probability values via this simple objective.

# 4 Experiments

# 4.1 Datasets

MetaQA (Zhang et al., 2018) is a large-scale multi-hop KBQA dataset with more than 400k questions, which are generated using dozens of templates and require up to 3 hops. Its knowledge graph is from the movie domain, including 43k entities, 9 predicates and 135k triples. Each sample has a corresponding hop label.

WebQSP (Yih et al., 2016) is a subset of WebQuestions augmented with the corresponding query statements. It contains 4,737 questions (2,998 train, 1,639 test) based on Freebase (Bollacker et al., 2008), which has millions of entities and triples. These questions can be solved with reasoning chains of 1 or 2 hops. Following Saxena et al. (2020), we pruned the KB to contain only mentioned relations and triples within 2 hops of mentioned entities. To improve the reasoning ability, we add reversed predicates. Finally, the KB includes 1.8 million entities, 1,144 predicates and 11.4 million triples.

CompWebQ (Talmor and Berant, 2018) is a further enhanced version of WebQSP with 34,689 questions (27,649 train, 3,509 dev, 3,531 test). It contains more complex multi-hop questions, mainly involving type constraints, explicit or implicit time constraints, and aggregation operations.

# 4.2 Baselines

- KVMemNN (Miller et al., 2016) uses key-value memory to store triplet knowledge and conducts multi-hop reasoning by reading the memory iteratively.
- GraftNet (Sun et al., 2018) uses the Personalized PageRank method to extract a question-specific subgraph and then infers answers using a graph neural network.
- PullNet (Sun et al., 2019) uses an iterative process to construct a question-specific subgraph and infers with heterogeneous information to find the best answers.
- ReifKB (Cohen et al., 2020) proposes a sparse-matrix reified KB to represent a symbolic knowledge base, which can be trained in an end-to-end way.
- EmbedKGQA (Saxena et al., 2020) utilizes the link-prediction ability of KG embeddings (Bordes et al., 2013; Trouillon et al., 2016) to handle multi-hop reasoning questions, especially on incomplete knowledge graphs.
- EMQL (Sun et al., 2020) proposes set operators to construct a more faithful query method for deductive reasoning.
- NSM (He et al., 2021) proposes a teacher network to provide weak supervision signals of reasoning paths for the student network.
- LSRL (Yan et al., 2021) proposes three relation learning tasks for BERT-based KBQA, including relation extraction, relation matching, and relation reasoning.
- TransferNet (Shi et al., 2021) proposes an effective and transparent framework, which supports both label and text relations.

| Type | Model | MetaQA 1-hop | MetaQA 2-hop | MetaQA 3-hop | WebQSP Hits@1 | WebQSP F1 | CompWebQ Hits@1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Embed-based | KVMemNN (Miller et al., 2016) | 96.2 | 82.7 | 48.9 | 46.7 | 38.6 | 21.1 |
| | GraftNet (Sun et al., 2018) | 97.0 | 94.8 | 77.7 | 67.8 | 62.4 | 32.8 |
| | PullNet (Sun et al., 2019) | 97.0 | 99.9 | 91.4 | 68.1 | - | 47.2 |
| | EmbedKGQA (Saxena et al., 2020) | 97.5 | 98.8 | 94.8 | 66.6 | - | - |
| | EMQL (Sun et al., 2020) | - | 98.6 | 99.1 | 75.5 | - | - |
| Path-based | ReifiedKB (Cohen et al., 2020) | 96.2 | 81.1 | 72.3 | 52.7 | - | - |
| | NSM (He et al., 2021) | - | - | - | 74.3 | 67.4 | - |
| | TransferNet (Shi et al., 2021) | 97.5 | 100.0 | 100.0 | 71.4 | - | 48.6 |
| | GFC (ours) | 97.7 | 100.0 | 100.0 | 76.8 | 69.2 | 50.4 |

Table 1: Experimental results: Hits@1 on MetaQA, WebQSP and CompWebQ, and F1 on WebQSP.

# 4.3 Experimental Settings

To reflect our model's ability on multi-hop questions more directly, we label each question in WebQSP with its number of hops according to the reasoning chains in the original data. Only about 20 questions lack reasoning chains, so we manually label the missing ones. The hop labels are used only for evaluation.

For the experiments on WebQSP and CompWebQ, we use the uncased base version of pretrained BERT (Devlin et al., 2019) as the question encoder; we download the bert-base-uncased model from HuggingFace $^{1}$ . We set the hop size to $T = 2$ for the WebQSP and CompWebQ datasets. For the experiments on MetaQA, we use a bi-directional GRU (Chung et al., 2014) as the question encoder and set the hop size to $T = 3$ .

Our model is trained with the RAdam (Liu et al., 2020) optimizer at a learning rate of $1e^{-3}$ , using a scheduler that increases the learning rate linearly from 0 to $1e^{-3}$ during a warmup period and then decreases it linearly. For BERT, we use a smaller learning rate of $3e^{-5}$ . The mini-batch size is set to 16 on WebQSP, 64 on CompWebQ and 128 on MetaQA. Besides Hits@1, we also use the average question-wise $F_{1}$ score as an evaluation metric. We trained our model on a single Tesla P40 GPU, which took about 16 hours for WebQSP, 40 hours for CompWebQ and 6 hours for MetaQA.

# 4.4 Main Results

Table 1 compares different models on the three benchmarks. As we can see, GFC performs on par with the state-of-the-art model TransferNet on MetaQA.
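The Hits@1 and question-wise F1 metrics reported in these tables can be sketched as follows — a minimal illustrative implementation, not the authors' evaluation script:

```python
def hits_at_1(pred_scores, gold):
    """Hits@1: 1 if the highest-scored entity is in the gold answer set."""
    top = max(pred_scores, key=pred_scores.get)
    return 1.0 if top in gold else 0.0

def question_f1(pred, gold):
    """Question-wise F1 between a predicted answer set and the gold set."""
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)                       # true positives
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def average_f1(predictions, golds):
    """Macro-average of the question-wise F1 over a dataset."""
    scores = [question_f1(p, g) for p, g in zip(predictions, golds)]
    return sum(scores) / len(scores)
```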
GFC achieves perfect accuracy on the 2-hop and 3-hop questions of MetaQA. On the 1-hop questions, GFC achieves $97.7\%$ , surpassing TransferNet and EmbedKGQA. The reason the 1-hop performance is lower than that of 2-hop and 3-hop is that additional relation constraints can alleviate the noise of the dataset itself.

WebQSP is more challenging than MetaQA because it has many more relations and triples but far fewer training samples. Specifically, GFC reaches $76.8\%$ Hits@1 on WebQSP, a new state-of-the-art result. Our path-based method beats the most effective embedding-based method, EMQL $(75.5\%)$ . In other words, our path-based method not only has better interpretability and extensibility, but also performs better. GFC also achieves a very competitive F1 of $69.2\%$ . On the CompWebQ dataset, we compare with Shi et al. (2021) and Sun et al. (2019) on the dev set. GFC achieves $50.4\%$ Hits@1, outperforming TransferNet $(48.6\%)$ and PullNet $(47.2\%)$ .

# 4.5 Ability to model sequential logic

To verify that GFC models the sequential logic of the whole multi-hop process effectively, we compare the Hits@1 of 1-hop and 2-hop questions between GFC and the strong path-based model TransferNet (Shi et al., 2021), based on the hop labels of WebQSP.
| Model | WebQSP 1-hop | WebQSP 2-hop |
| --- | --- | --- |
| TransferNet | 79.4 | 58.7 |
| GFC (ours) | 81.6 | 68.4 |

Table 2: Hits@1 comparison of 1-hop and 2-hop questions on WebQSP between GFC and TransferNet.

Table 2 shows that Hits@1 on 1-hop and 2-hop questions increases by $2.2\%$ and $9.7\%$ , respectively. The fact that GFC performs much better on 2-hop questions demonstrates the effectiveness of our proposed GRU-inspired flow control framework: it can take into account what has already been focused on and thereby alleviate some illogical reasoning (see Figure 1).

# 4.6 Reasoning ability over incomplete KG

In real application scenarios, the knowledge graph (KG) is usually incomplete, which requires models to have stronger reasoning ability and robustness. In general, there are several similar paths from the topic entities to the answer entities, but some paths are incorrect even if they can lead to the right answers. As shown in Figure 6, there are two paths from the topic entity George VI to the answer Queen Elizabeth. The upper path is wrong, because the relations people.person.children and people.person.parents do not fit the specific question What is the name of king george vi wife. Some of these paths disappear when the KB is incomplete; in that case, the model must follow the right paths to obtain the right answers, which requires stronger reasoning ability and robustness. Embedding-based methods suffer here because far fewer triples are available for training the KG embeddings, yielding worse embeddings of entities and relations. We compare GFC with other
| Model | WebQSP (full KG) | WebQSP (KG-50) |
| --- | --- | --- |
| KVMemNN | 46.7 | 32.7 |
| GRAFT-Net | 67.8 | 48.2 |
| PullNet | 68.1 | 50.3 |
| EmbedKGQA | 66.6 | 53.2 |
| TransferNet | 71.4 | 52.4 |
| LSRL | 72.9 | 58.8 |
| GFC (ours) | 76.8 | 59.5 |
Table 3: Performance comparison of Hits@1 with the full KG and the $50\%$ KG on WebQSP.

competitive methods on the incomplete WebQSP with half of the KG, preprocessed by EmbedKGQA (Saxena et al., 2020). The results in Table 3 show that GFC achieves $59.5\%$ Hits@1, performing much better than EmbedKGQA $(53.2\%)$ , which is specifically designed to handle multi-hop KBQA on incomplete KGs. GFC also surpasses the strong path-based method TransferNet by a large margin, which shows that our method has stronger reasoning ability. In particular, GFC surpasses the previous state of the art, LSRL $(58.8\%)$ , while remaining simple and requiring no additional pre-training tasks.

# 4.7 Impact of hop size

![](images/bcb7d3a48480ada0f7aa3eedadd4d78d868c473cf56dfa3f79870a98ae30d171.jpg)
Figure 5: Results when setting different hop sizes.

Hop size is a crucial hyperparameter. To investigate its impact, we compare the performance of GFC and TransferNet under different hop sizes. As shown in Figure 5, the performance of both models decreases to varying degrees as the hop size increases: most questions in WebQSP need no more than 2-hop reasoning, so excessive hop sizes introduce additional noise. Compared to TransferNet, however, GFC performs more stably across hop sizes, and the gap between the two models gradually widens as the hop size increases.

# 4.8 Ablation Study

We remove or replace model components and report the performance on the WebQSP and CompWebQ datasets in Table 4. In (a), we remove the SGU. In (b), we replace $H^0$ with $H^{t-1}$ in Eq. 10 to evaluate the importance of the initial question semantics. In (c), we remove the past reasoning information, which can be viewed as a partial ablation of the GRU-inspired information flow control framework; the sequential reasoning self-attention mechanism and some tightly connected modules still remain.
As shown in Table 4, taking WebQSP as an example, the past reasoning information is the most

![](images/1fe5d82ccf292a9e63a1ced92d2771cecbd20a94afb3d626cfec0182ac9cdeb3.jpg)
Figure 6: The crucial part of the reasoning process for one example from WebQSP. We start from the topic entity George VI. In the 1st hop, GFC gives the relation people.person.spouse_s the highest score, 0.947. There is no path from George VI with the relation peopleappoint.apointed_by, so this path is broken. This is one of the advantages of our method: it can use the rich topology information of the knowledge graph to filter out irrelevant relations and entities. The final score of Queen Elizabeth is the sum of the two paths. The final answers are selected by the multi-hop attention. We restrict the scores to $[0,1]$ to ease model training.
| Model | WebQSP | CWQ |
| --- | --- | --- |
| GFC-full (ours) | 76.8 | 50.4 |
| (a) w/o SGU | 75.3 | 49.6 |
| (b) w/o initial semantics | 76.1 | 49.9 |
| (c) w/o past information | 75.1 | 49.3 |
Table 4: Ablation study on WebQSP and CompWebQ (CWQ).

critical to the performance ($1.7\%$ drop), which shows that past reasoning information helps the current decision. In (b), the performance drops by about $0.7\%$ , which indicates that updating upon the initial question representation indeed alleviates large semantic deviations. In (a), removing the SGU accounts for a $1.5\%$ performance drop, which demonstrates the effectiveness of the SGU in refining reasoning information and alleviating the noise introduced by past reasoning information.

# 4.9 Error Analysis

Figure 6 shows the reasoning process of one correctly answered example for our model GFC. In addition, we examine frequently observed error cases in which the proposed model fails to produce correct answers. The first type of error occurs when questions are tokenized incorrectly by the BERT tokenizer, such as what highs ##cho ##ol did harper lee go to and when's the last time the steelers won the superb ##ow ##l. BERT tokenizes the crucial topic entity incorrectly, which prevents our model from recognizing the correct relations in the current hop. A straightforward remedy is to add these incorrectly tokenized entities to the vocabulary, but then the pretrained word embeddings of BERT cannot be used for them. We tried learning these words from scratch, but obtained worse results because there are not enough training samples for these entities. The second type of error is caused by relation confusion. Many relations have very similar meanings, such as tv.tv_guest_role.actor and tv.regular_tv_appearance.actor. GFC cannot distinguish them clearly because the number of samples related to them is small.

# 5 Conclusions

In this paper, we design (i) a sequential reasoning self-attention mechanism to extract the crucial reasoning information of each hop in a more fine-grained way and (ii) a GRU-inspired flow control framework to model sequential logic in the whole multi-hop process more effectively.
Experimental results show the superior performance of GFC. Specifically, GFC achieves new state-of-the-art Hits@1 performance on WebQSP, and it remains highly effective when the KB is incomplete. As a path-based method, GFC not only has better interpretability and extensibility, but also performs better. In future work, we plan to further investigate how to model the multi-hop reasoning process using the structures of language models.

# Limitations

Although our method achieves strong performance on the multi-hop KBQA task, there are still limitations to be addressed. The limitations of our study are summarized as follows:

1) The optimal hop size in our model is determined empirically. On one hand, the performance of GFC is not entirely stable as the hop size increases (see Figure 5). On the other hand, the hop size required for reasoning differs across complex questions in real application scenarios; reasoning with the same hop size for all questions greatly increases the computational cost and introduces unnecessary noise. Thus, how to adaptively determine the optimal hop size for each question remains a key challenge for the multi-hop KBQA task.
2) As discussed in the error analysis, some relations have very similar meanings but few training samples. Our model does not work well with these relations.
3) Our model can only receive feedback from the final answers. How to provide more supervision signals from the perspective of model design will be an interesting direction to explore.

# Ethics Statement

We worked within the purview of acceptable privacy practices and strictly followed the data usage policy. In all the experiments, we use public datasets according to their intended usage. We have also described our experimental settings in detail to ensure the reproducibility of our work.
We neither introduce any social/ethical bias to the model nor amplify any bias in the data, so we do not foresee any direct social consequences or ethical issues.

# Acknowledgments

This work is supported in part by the Natural Science Foundation of China (grants No. 62276188 and No. 61876129), the Beijing Academy of Artificial Intelligence (BAAI), TJU-Wenge joint laboratory funding, and MindSpore $^{2}$ .

# References

Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247-1250.
Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2787-2795.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
William W. Cohen, Haitian Sun, R. Alex Hofer, and Matthew Siegler. 2020. Scalable neural methods for reasoning with a symbolic knowledge base. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Luke Vilnis, Ishan Durugkar, Akshay Krishnamurthy, Alex Smola, and Andrew McCallum. 2018. Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Li Dong, Furu Wei, Ming Zhou, and Ke Xu. 2015. Question answering over Freebase with multi-column convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 260-269, Beijing, China. Association for Computational Linguistics.
Jiale Han, Bo Cheng, and Xu Wang. 2020. Two-phase hypergraph based reasoning with dynamic relations for multi-hop KBQA. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3615-3621. ijcai.org.
Yanchao Hao, Yuanzhe Zhang, Kang Liu, Shizhu He, Zhanyi Liu, Hua Wu, and Jun Zhao. 2017. An end-to-end model for question answering over knowledge base with cross-attention combining global knowledge. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 221-231, Vancouver, Canada. Association for Computational Linguistics.
Gaole He, Yunshi Lan, Jing Jiang, Wayne Xin Zhao, and Ji-Rong Wen. 2021.
Improving multi-hop knowledge base question answering by learning intermediate supervision signals. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pages 553-561.
Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing a multi-hop QA dataset for comprehensive evaluation of reasoning steps. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6609-6625, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Yunshi Lan and Jing Jiang. 2020. Query graph generation for answering multi-hop complex questions from knowledge bases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 969-974, Online. Association for Computational Linguistics.
Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2020. On the variance of the adaptive learning rate and beyond. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Kangqi Luo, Fengli Lin, Xusheng Luo, and Kenny Zhu. 2018. Knowledge base question answering via encoding of complex query graphs. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2185-2194, Brussels, Belgium. Association for Computational Linguistics.
Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1400-1409, Austin, Texas.
Association for Computational Linguistics.
Yunqi Qiu, Yuanzhuo Wang, Xiaolong Jin, and Kun Zhang. 2020. Stepwise reasoning for multi-relation question answering over knowledge graph with weak supervision. In WSDM '20: The Thirteenth ACM International Conference on Web Search and Data Mining, Houston, TX, USA, February 3-7, 2020, pages 474-482. ACM.
Apoorv Saxena, Aditay Tripathi, and Partha Talukdar. 2020. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4498-4507, Online. Association for Computational Linguistics.
Priyanka Sen, Armin Oliya, and Amir Saffari. 2021. Expanding end-to-end question answering on differentiable knowledge graphs with intersection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8805-8812, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jiaxin Shi, Shulin Cao, Lei Hou, Juanzi Li, and Hanwang Zhang. 2021. TransferNet: An effective and transparent framework for multi-hop question answering over relation graph. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4149-4158, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jiaxin Shi, Shulin Cao, Liangming Pan, Yutong Xiang, Lei Hou, Juanzi Li, Hanwang Zhang, and Bin He. 2020. KQA Pro: A large-scale dataset with interpretable programs and accurate SPARQLs for complex question answering over knowledge base. ArXiv preprint, abs/2007.03875.
Haitian Sun, Andrew Arnold, Tania Bedrax Weiss, Fernando Pereira, and William W Cohen. 2020. Faithful embeddings for knowledge base queries. In Advances in Neural Information Processing Systems, volume 33, pages 22505-22516. Curran Associates, Inc.
Haitian Sun, Tania Bedrax-Weiss, and William Cohen. 2019.
PullNet: Open domain question answering with iterative retrieval on knowledge bases and text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2380-2390, Hong Kong, China. Association for Computational Linguistics.
Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. 2018. Open domain question answering using early fusion of knowledge bases and text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4231-4242, Brussels, Belgium. Association for Computational Linguistics.
Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 641-651, New Orleans, Louisiana. Association for Computational Linguistics.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 2071-2080. JMLR.org.
Kun Xu, Yuxuan Lai, Yansong Feng, and Zhiguo Wang. 2019. Enhancing key-value memory neural networks for knowledge based question answering. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2937-2947, Minneapolis, Minnesota. Association for Computational Linguistics.
Yuanmeng Yan, Rumei Li, Sirui Wang, Hongzhi Zhang, Daoguang Zan, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021.
Large-scale relation learning for question answering over knowledge bases with pre-trained language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3653-3660.
Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1321-1331, Beijing, China. Association for Computational Linguistics.
Wen-tau Yih, Matthew Richardson, Chris Meek, Ming-Wei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 201-206, Berlin, Germany. Association for Computational Linguistics.
Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander J. Smola, and Le Song. 2018. Variational reasoning for question answering with knowledge graph. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 6069-6076. AAAI Press.
Mantong Zhou, Minlie Huang, and Xiaoyan Zhu. 2018. An interpretable reasoning network for multi-relation question answering. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2010-2022, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
\ No newline at end of file diff --git a/asequentialflowcontrolframeworkformultihopknowledgebasequestionanswering/images.zip b/asequentialflowcontrolframeworkformultihopknowledgebasequestionanswering/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e038b9a4c0bce5d897af3c38cb69d8eccd9313be --- /dev/null +++ b/asequentialflowcontrolframeworkformultihopknowledgebasequestionanswering/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3be29a6a09f656ea2c7572cce907ac19b74e0e43c1891bd31db90c991665592 +size 433961 diff --git a/asequentialflowcontrolframeworkformultihopknowledgebasequestionanswering/layout.json b/asequentialflowcontrolframeworkformultihopknowledgebasequestionanswering/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4d44c36c936bcdc97ee55d5905fe70777bd2278f --- /dev/null +++ b/asequentialflowcontrolframeworkformultihopknowledgebasequestionanswering/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e77a951b0df4c3e9ff66d8f8b9b850baa52bc637d535ae977284c1538ca556f +size 380809 diff --git a/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/fef3dd23-51c2-4807-ab78-5eecc797a572_content_list.json b/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/fef3dd23-51c2-4807-ab78-5eecc797a572_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7831a38585b166e10ac26c17cdbb816e3b2801b4 --- /dev/null +++ b/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/fef3dd23-51c2-4807-ab78-5eecc797a572_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c659202359778ee6ef659e6399bc3c93114b5dc4053de805715f6e84bccf6138 +size 86758 diff --git 
a/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/fef3dd23-51c2-4807-ab78-5eecc797a572_model.json b/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/fef3dd23-51c2-4807-ab78-5eecc797a572_model.json new file mode 100644 index 0000000000000000000000000000000000000000..70dc82edbc74e04e8e69af049c63bed1c65443f1 --- /dev/null +++ b/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/fef3dd23-51c2-4807-ab78-5eecc797a572_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec20cac2291bc48a2ce01b55af7afea6852694102f22c84c108c6d791b941efa +size 107257 diff --git a/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/fef3dd23-51c2-4807-ab78-5eecc797a572_origin.pdf b/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/fef3dd23-51c2-4807-ab78-5eecc797a572_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..10dc540fac66004df97c83b9e777d1d7062fa4d1 --- /dev/null +++ b/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/fef3dd23-51c2-4807-ab78-5eecc797a572_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58e2b52ada0fceb8671c7c778c6ceabbac4d6a1a36b8e2d3568e5e387650d571 +size 1055544 diff --git a/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/full.md b/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9780c65261316221ac12071c61ec77c03e46414d --- /dev/null +++ b/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/full.md @@ -0,0 +1,406 @@ +# A 
Simple Contrastive Learning Framework for Interactive Argument Pair Identification via Argument-Context Extraction

Lida Shi $^{1}$ , Fausto Giunchiglia $^{1,2,3}$ , Rui Song $^{1}$ , Daqian Shi $^{3}$ , Tongtong Liu $^{2}$

Xiaolei Diao $^{3}$ , Hao Xu $^{1,2,*}$

$^{1}$ School of Artificial Intelligence, Jilin University

$^{2}$ College of Computer Science and Technology, Jilin University

$^{3}$ DISI, University of Trento

{shild21, songrui20, liutt20}@mails.jlu.edu.cn, xuhao@jlu.edu.cn

{fausto.giunchiglia, daqian.shi, xiaolei.diao}@unitn.it

# Abstract

Interactive argument pair identification is an emerging research task in argument mining, aiming to identify whether two arguments are interactively related. It has been pointed out that the context of an argument is essential for improving identification performance. However, current context-based methods achieve limited improvements, since the entire context typically contains much irrelevant information. In this paper, we propose a simple contrastive learning framework to solve this problem by extracting valuable information from the context. This framework constructs hard argument-context samples and obtains a robust and uniform representation by introducing contrastive learning. We also propose an argument-context extraction module that enhances information extraction by discarding irrelevant blocks. The experimental results show that our method achieves state-of-the-art performance on the benchmark dataset. Further analysis demonstrates the effectiveness of our proposed modules and visually displays more compact semantic representations. The code is available on GitHub $^{1}$ .

# 1 Introduction

Computational argumentation, as a branch of natural language understanding, has become a new research field. Existing work can be divided into two categories (Asterhan and Schwarz, 2007): monological argumentation and dialogical argumentation.
Monological argumentation is the scenario with a single participant, such as RCT (Mayer et al., 2020), student essays (Stab and Gurevych, 2014) and user comments (Niculae et al., 2017). For this sort of study, researchers focus on topics like argumentation (argument) mining (Galassi et al., 2018; Morio et al., 2020; Jo et al., 2019; Ruiz-Dolz et al., 2021), argument assessment (Anne et al., 2020; Skitalinskaya et al., 2021), and argument reasoning (Botschen et al., 2018; Habernal et al., 2018; Ruiz-Dolz et al., 2021). Recently, researchers have been paying great attention to dialogical argumentation, since online forums have become the primary medium for argumentation and discussion.

![](images/1f76c0d7a5403339249df0fed87d4b10748df8308ff053146026386fa0f2e682.jpg)
Quotation and its context

![](images/c702c75c0e43bd9cd246a9fb630e43379009f8a7a4078e076343604cb133de0c.jpg)

![](images/cc5a41ebc7b0b7740bec82e3fde8685b00963090d9d2eb47b9ae89a688dd7fa4.jpg)

![](images/2a6e9499c07aff4ef4a9240c8f11a8b3484df1e8e488e2e93d3253d582b57d26.jpg)
Positive reply and its context
Negative reply2 and its context
Negative reply1 and its context
Figure 1: An instance in the dataset. Each instance includes six arguments: a quotation and its corresponding five candidate replies. Additionally, the task provides contextual information for each argument. For this task, the model needs to identify whether the quotation and the reply are interactively related. Only one of the five candidate replies is correct. The arguments are shown in green font, and the context in black.
![](images/57f253f5b8c5d931b15b9c5088760794bfaf1d287ce951659f0893f0599822b3.jpg)
(a)

![](images/85985b6c2463bdb7d2f96b4c7892affa963e5100b8d41c0d455cafab680d4a9b.jpg)
(b)

![](images/e87ee32ba775e16337531b677dae4630c2cb9a8776e33b3895756987c9debcd0.jpg)
(c)

![](images/ea6ca3e585c20622afe6b4e86119f6dd49eca14a2e4c2f2ad0bd5a6668fdb478.jpg)
(d)
Figure 2: Four heatmaps of the semantic similarity between the argument and each sentence in its context. The horizontal coordinate is the sentence index, and the vertical coordinate is the range of semantic similarity. (a) quotation-context (b) positive reply-context (c) negative reply1-context (d) negative reply2-context.

People can express themselves on the network anywhere and at any time, thanks to the widespread use of the Internet and communication technologies. Indeed, different people have diverse arguments on a subject, and argumentation is the most effective way to interchange arguments. Many online forums, such as ChangemyView $^{2}$ and idebate $^{3}$ , provide a venue for free online argumentation, allowing users to argue with others regardless of time or location. Therefore, the study of argumentation in interactive text has arisen. Earlier research (Wei et al., 2016; Tan et al., 2016) uses data from the ChangemyView forum to focus on the key elements of persuasive arguments. Then, (Lu et al., 2021) formulates an interesting and meaningful task of identifying whether two arguments are interactively related. More interestingly, (Yuan et al., 2021a) have applied the task to the legal field, helping the court pinpoint the focus of a case by analyzing the arguments of both sides in the trial transcript so that the judge can make a fair decision. Figure 1 demonstrates the details of this task.

Obviously, it is difficult to identify the interactive relationship from the two arguments alone, because most arguments contain only a few words.
Moreover, contextual information is related to the meaning of the quotation and reply; thus, it is essential to utilize contexts. (Lu et al., 2021) propose a hierarchical RNN network to model the context. (Yuan et al., 2021b) constructs an argumentation knowledge graph to extract entity information from the context. However, the current context-based methods achieve limited improvement, since the entire context normally contains a large amount of irrelevant information. Figure 2 shows the heatmaps of the semantic similarity between the argument and each sentence in its context for the instance in Figure 1. Remarkably, many sentences in the context have very low semantic similarity to the argument, and some are even close to 0. Intuitively, as shown in Figure 1, the quotation talks about “terrorists deserve justice in court” while its context mentions “religion”, “drones”, and other irrelevant information. “drones” can also be found in the contexts of the two negative replies. If the whole context is modeled, the model is likely to wrongly infer an interactive relationship between the quotation and negative reply2. Undoubtedly, these irrelevant sentences are noisy data for this task and negatively affect model training.

In this paper, we propose a simple contrastive learning framework to enhance the robustness of the model under noise conditions and reduce the adverse effects of noise on the model. This framework constructs hard argument-context samples by randomly extracting context blocks. We combine the cross-entropy and the supervised contrastive loss to improve the expressiveness of the representations. In addition, we propose an argument-context extraction (ACE) module to enhance context information extraction. In this module, we compute the semantic similarity between the argument and its context blocks, and extract the context blocks with high similarity as the model's input.
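To make the similarity signal behind Figure 2 concrete, the following toy sketch scores each context sentence against the argument with cosine similarity. The bag-of-words embedding here is a stand-in purely for illustration (the paper instead uses sentence embeddings from a SimCSE-pretrained BERT), and the example sentences are hypothetical:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words embedding; the actual model uses sentence
    # embeddings from a SimCSE-pretrained BERT (Gao et al., 2021).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

argument = "terrorists deserve justice in court"
context_sentences = [
    "terrorists should face justice in a court of law",
    "religion is not the issue here",
    "drones are used far too often",
]
sims = [cosine(embed(argument), embed(s)) for s in context_sentences]
# The first sentence shares the most tokens with the argument, so it
# scores highest, mirroring the bright cells in Figure 2; the other
# two sentences share nothing and score near zero.
```

A real run would rank all blocks by these scores and keep only the top ones, which is exactly what the ACE module of Section 3.2 does with BERT embeddings.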
Through empirical analysis, we observe that our model performs better on both the benchmark dataset and a noisy dataset, which demonstrates the superiority of our method. Our main contributions can be summarized as follows:

- We propose a simple contrastive learning framework to obtain robust and uniform semantic representations.
- We design an argument-context extraction (ACE) module to enhance information extraction by discarding irrelevant blocks.
- The experimental results show that our method achieves state-of-the-art performance on the benchmark dataset. Further analysis demonstrates the effectiveness of our proposed modules and visually displays more compact semantic representations.

# 2 Related Work

# 2.1 Argumentation Mining

Argumentation (argument) mining aims to identify writing structures (such as claims, evidence, and statements) and detect the relations existing in texts (Lytos et al., 2019; Lawrence and Reed, 2020). Many methods have been proposed in previous studies, such as BiLSTM (Eger et al., 2017), multi-task learning (Galassi et al., 2021, 2018), attentive residual networks (Galassi et al., 2021), unsupervised knowledge (Dutta et al., 2022), transformer-based models (Ruiz-Dolz et al., 2021; Mayer et al., 2020), and cascade models (Jo et al., 2019). In addition, many researchers have applied the task to scenarios such as healthcare (Mayer et al., 2020), education (Stab and Gurevych, 2014; Alhindi and Ghosh, 2021), and peer reviews (Niculae et al., 2017).

Different from the monological argumentation mentioned above, an increasing number of researchers have begun to study dialogical argumentation. (Ji et al., 2018) investigates the issue of persuasiveness evaluation for argumentative comments. (Cheng et al., 2020) introduces a new argument pair extraction task on peer review and rebuttal to study the contents, structures and connections between them.
Similarly, (Lu et al., 2021) propose the task of identifying the interactive argument pair in an online debate forum. Subsequently, (Yuan et al., 2021b) leverages a knowledge graph (Khatib et al., 2020) to model the contextual information, encoding the entities and paths in the context to obtain entity embeddings and path representations.

# 2.2 Contrastive Learning in NLP

Contrastive learning (CL) has gained tremendous attention in the natural language processing (NLP) field. The main idea is to train a representation layer by pulling representations of positive samples closer and separating them from negative ones. Contrastive learning can be divided into self-supervised contrastive learning and supervised contrastive learning, and positive and negative samples have different definitions in different scenarios. In self-supervised contrastive learning, (Fang et al., 2020) propose a pre-trained language representation model (CERT) using contrastive learning at the sentence level to facilitate language understanding tasks. (Gao et al., 2021) propose a simple sample augmentation strategy that merely adjusts dropout masks in a contrastive learning framework, advancing the state of the art in sentence embeddings. In supervised contrastive learning, (Gao et al., 2021) incorporates annotated pairs from natural language inference datasets into the contrastive learning framework, using "entailment" pairs as positives and "contradiction" pairs as hard negatives. Inspired by (Khosla et al., 2020), (Gunel et al., 2020) propose a new supervised contrastive loss (SCL). Combined with cross-entropy, the new SCL loss obtains significant improvements on multiple datasets of the GLUE benchmark in few-shot learning settings.

# 3 Method

# 3.1 Task Definition

Figure 1 demonstrates the details of this task. The task involves two kinds of arguments: quotation and reply.
For a quotation $q$ and its context $c_{q}$ , there are five candidate replies $\{r_{i}\}_{i=1}^{5}$ with their corresponding contexts $\{c_{r_{i}}\}_{i=1}^{5}$ . The model needs to identify whether the quotation and the reply are interactively related. Only one of the five candidate replies is correct. We use $arg$ as a general term for $q$ and $r$ . Previous research (Yuan et al., 2021b; Lu et al., 2021) treats the task as a sentence pair ranking problem. In this paper, we treat the task as a binary classification problem: if two arguments are interactively related, the label is 1; otherwise, the label is 0.

# 3.2 Argument-context Extraction Module

In this paper, we introduce the idea of information retrieval to discard irrelevant information in the context. Inspired by (Li and Gaussier, 2021; Li et al., 2021), the argument-context extraction module is based on three main steps: (1) context block segmentation, (2) argument-context similarity calculation, and (3) context block selection. The structure is shown in Figure 3. The following describes each step in detail.

![](images/667b86f13ab6e834bee6079c8848a82f1176598319d0827df126abdcef22dcd4.jpg)
Figure 3: An illustration of the argument-context extraction module.

# 3.2.1 Context Block Segmentation

Here, we adopt the dynamic programming method (Ding et al., 2020) to segment the context into blocks. The main idea of the method is to segment a document into multiple blocks by punctuation, where the block size is a hyperparameter (denoted as $\alpha$ in this paper). It assigns different costs to different punctuation marks so as to segment preferentially at strong punctuation marks such as “:”, “?” and “!”. This process may damage the coherence of the whole context, but we consider that redundant context blocks would be detrimental to classification. In other words, a few key blocks in the context store sufficient and necessary information to fulfill this task, which is why the redundant context should be removed.
The algorithm is shown in Appendix A.

# 3.2.2 Argument-context Similarity Calculation

After the block segmentation module, $c_{arg}$ is segmented into $N$ blocks. Next, we evaluate the semantic relevance between each block and $arg$ by calculating the cosine similarity of their embeddings:

$$
\operatorname{Sim}\left(\arg, c_{\arg}\right) = \left[ \begin{array}{c} \operatorname{sim}\left(h_{\arg}, h_{block_{1}}\right) \\ \operatorname{sim}\left(h_{\arg}, h_{block_{2}}\right) \\ \dots \\ \operatorname{sim}\left(h_{\arg}, h_{block_{N-1}}\right) \\ \operatorname{sim}\left(h_{\arg}, h_{block_{N}}\right) \end{array} \right] \tag{1}
$$

where $h = BERT_{\theta}(x)$ is the sentence embedding. In this work, we use the BERT pre-trained by (Gao et al., 2021) to encode sentences into embeddings. $\operatorname{sim}(h_1, h_2)$ is the cosine similarity $\frac{h_1^T h_2}{\| h_1 \| \| h_2 \|}$ . $\operatorname{Sim}(arg, c_{arg})$ is the vector of similarities between $arg$ and the context blocks.

# 3.2.3 Context Block Extraction

For this task, we propose a new input form for BERT to combine $arg$ and its context. Following the steps above, the context blocks most relevant to $arg$ are obtained by ranking $\operatorname{Sim}(\arg, c_{arg})$ . Next, the most relevant blocks are concatenated together (in their order of appearance in the context) and with $arg$ . Finally, we use [SEP] to separate the quotation part from the reply part:

$$
c_{q}^{b} = c_{q}^{b1}, c_{q}^{b2}, \dots c_{q}^{bn} \tag{2}
$$

$$
c_{r}^{b} = c_{r}^{b1}, c_{r}^{b2}, \dots c_{r}^{bn} \tag{3}
$$

$$
z = [CLS]\, q, c_{q}^{b}\, [SEP]\, r, c_{r}^{b}\, [SEP] \tag{4}
$$

where $z$ is the input of BERT and $c_{q}^{b}$ is the top $n$ ( $n \leq N$ ) blocks that are most similar to $q$ .
Similarly, $c_{r}^{b}$ is the top $n$ blocks that are most similar to $r$ . Note that the number $n$ of selected blocks depends on the capacity of BERT and the block size $\alpha$ . The token length constraint is defined as follows:

$$
3 + L(q) + \sum_{i=1}^{n} L\left(c_{q}^{bi}\right) + L(r) + \sum_{i=1}^{n} L\left(c_{r}^{bi}\right) \leq 512 \tag{5}
$$

where $L(x)$ is the length of $x$ . To prevent information loss, we try to satisfy the above inequality when setting $n$ and $\alpha$ . If the input length exceeds 512, we use hard truncation to comply with the input limit of BERT.

# 3.3 Contrastive Learning Framework

Prior work (Gunel et al., 2020; Gao et al., 2021) has demonstrated that contrastive learning is effective for learning sentence embeddings by pulling representations of positive samples closer and separating them from negative ones. Inspired by this, we introduce the contrastive learning objective into interactive argument pair identification and propose a new hard sample construction method. The detailed architecture of contrastive learning for interactive argument pair identification is shown in Figure 4.

![](images/3473f956265c2ec539aa2edce8f337b8fc352853d11735b6d721b9fe061002e2.jpg)
Figure 4: An illustration of our framework. Note that different colored blocks denote different values.

# 3.3.1 Definition of Positive and Negative Samples

In self-supervised contrastive learning, the construction of positive and negative samples is a central question, and many works (Chuang et al., 2022; Wu et al., 2021) seek effective methods for constructing them. In supervised contrastive learning, the samples are labeled, so positive and negative examples can be easily obtained. We treat the task as a binary classification problem: if two arguments are interactively related, we define them as a positive sample and denote it by $z^{+}$ .
Otherwise, we define them as a negative sample and denote it by $z^{-}$ .

# 3.3.2 Hard Samples Construction

We propose a hard sample construction method in order to enhance the robustness of the model under noise conditions. Our method is very simple. As shown in Section 3.2, different block sizes and block selection rules generate different blocks in the context segmentation module, and we exploit this to construct hard samples. Specifically, we use the strategy of randomly selecting context blocks. When the random selection strategy is used, more irrelevant information is introduced, so it is more difficult for the model to identify the interactive relationship between the two arguments than with an input built using the high-similarity selection strategy. The equations are as follows:

$$
z_{hard} = [CLS]\, q, c_{q}^{b}\, [SEP]\, r, c_{r}^{hb}\, [SEP] \tag{6}
$$

$$
c_{r}^{hb} = c_{r}^{b1}, c_{r}^{b2}, \dots c_{r}^{bm} \tag{7}
$$

$$
c_{q}^{b} = c_{q}^{b1}, c_{q}^{b2}, \dots c_{q}^{bn} \tag{8}
$$

where $z_{hard}$ denotes the constructed hard sample, which uses a different block size and a random block selection strategy compared to the original sample. $c_r^{hb}$ denotes the context of the reply containing more irrelevant information. Note that $c_r^{hb}$ and $c_r^b$ are different, and $m \neq n$ . In practice, for each positive sample we construct three corresponding hard samples, and for each negative sample we construct one hard sample. The hard samples construction is essentially a data augmentation method from the data perspective. On the one hand, it increases the complexity and diversity of the dataset. On the other hand, it alleviates the problem of unbalanced data distribution (previously 1:4, now 1:2).

# 3.3.3 Training Objectives

Our framework contains two training objectives: binary classification and contrastive learning.
For binary classification, we use the binary cross-entropy loss. For contrastive learning, we use a supervised contrastive learning paradigm. Specifically, we introduce a supervised contrastive learning loss (Gunel et al., 2020) formulated to pull representations from the same class close together and push representations from different classes further apart. The loss functions are defined as follows:

$$
\mathcal{L}_{bce} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_{i} \log \hat{y}_{i} + (1 - y_{i}) \log\left(1 - \hat{y}_{i}\right) \right] \tag{9}
$$

$$
\mathcal{L}_{scl} = -\frac{1}{N} \sum_{i=1}^{N} \frac{1}{N_{y_{i}} - 1} \sum_{j=1, i \neq j, y_{i} = y_{j}}^{N} \Phi\left(h_{i}, h_{j}\right) \tag{10}
$$

$$
\Phi\left(h_{i}, h_{j}\right) = \log \frac{\mathrm{e}^{sim\left(h_{i}, h_{j}\right)/\tau}}{\sum_{k=1, k \neq i}^{N} \mathrm{e}^{sim\left(h_{i}, h_{k}\right)/\tau}} \tag{11}
$$

In $\mathcal{L}_{bce}$ , $y_{i}$ denotes the label of the $i$-th sample and $\hat{y}_i$ denotes the probability the model outputs for the $i$-th sample. In $\mathcal{L}_{scl}$ , $N_{y_i}$ is the total number of samples in the mini-batch that have the same label as $y_{i}$ . $\tau$ is an adjustable scalar temperature hyperparameter that controls the separation of classes.

# 3.3.4 Uncertainty Weighting

We introduce Uncertainty Weighting (UW) (Kendall et al., 2018) to learn the weights between contrastive learning and binary classification. It dynamically weights multiple loss functions by considering the homoscedastic uncertainty of each objective. Therefore, it can combine losses of different orders of magnitude.
Specifically, it rewrites the joint loss function as the following weighted sum:

$$
\mathcal{L}_{UW}\left(\mathcal{L}_{1}, \mathcal{L}_{2}\right) = \frac{1}{2\sigma_{1}^{2}}\mathcal{L}_{1} + \frac{1}{2\sigma_{2}^{2}}\mathcal{L}_{2} + \log \sigma_{1}\sigma_{2} \tag{12}
$$

where $\sigma$ denotes the model's observation noise parameter, capturing how much noise is present in the outputs. It is a learnable parameter. Specifically, $\sigma_{1}$ and $\sigma_{2}$ control the relative weights of $\mathcal{L}_1$ and $\mathcal{L}_2$ , respectively, and $\log \sigma_1\sigma_2$ is a regularization term that prevents $\sigma$ from growing too large. For our task, we use uncertainty weighting to combine the contrastive learning loss $\mathcal{L}_{scl}$ with the binary classification loss $\mathcal{L}_{bce}$ as the overall loss:

$$
\mathcal{L}_{\text{overall}} = \mathcal{L}_{UW}\left(\mathcal{L}_{bce}, \mathcal{L}_{scl}\right) \tag{13}
$$

In this way, the two objectives are adaptively balanced by Uncertainty Weighting during training.

# 4 Experiments

# 4.1 Experiment Setup

# 4.1.1 Experimental Dataset

The dataset$^{4}$ we use is constructed by (Lu et al., 2021). The data collection is built on the ChangemyView dataset (Tan et al., 2016). For this task, each instance includes the quotation, one positive reply, four negative replies and their contexts. The numbers of instances in the training and testing sets are 11565 and 1481, respectively. Following previous work (Lu et al., 2021; Yuan et al., 2021b), we randomly split $10\%$ of the training set off as a validation set. In our experiments, the numbers of instances in the training and validation sets are 10408 and 1157, respectively.

# 4.1.2 Implementation Details

The hidden output dimension of BERT is 768. Dropout of 0.1 is applied to avoid overfitting. We use AdamW (Loshchilov and Hutter, 2018) as our optimizer, with the weight decay set to $1 \times 10^{-8}$ .
The maximum sequence length is set to 512, and the initial learning rate is set to $1 \times 10^{-4}$ . The model is trained on the training set for 5 epochs with a batch size of 40. For the normal samples, the block size and number of blocks are set to 6 and 42, respectively. For the hard samples, there are three alternative settings: the block size and number of blocks are set to 4, 5, 8 and 64, 56, 32, respectively. We implement our code using the PyTorch (Paszke et al., 2019) and Huggingface Transformers (Wolf et al., 2020) libraries. The hyperparameter $\tau$ is set to 0.03. The experiments are conducted on an NVIDIA V100 32GB GPU.

# 4.1.3 Models for Comparison

In order to demonstrate the effectiveness and superiority of our method, we compare it with several state-of-the-art methods. The main comparison methods are as follows:

- BERT without Context (Devlin et al., 2019): This method fine-tunes BERT for sentence pair classification. It uses only the quotation and reply, without their contextual information. The input form of BERT without context is $z = [CLS]q[SEP]r[SEP]$ .
- Hierarchical Context (Lu et al., 2021): This method designs a discrete variational autoencoder (DVAE) to extract the representations of the quotation and replies. A hierarchical structure is proposed to obtain the representation of the context with a BiGRU. Finally, it integrates the quotation or reply representations with their contextual representations to obtain the final sentence encoding.
- Knowledge Graph and GCN (Yuan et al., 2021b): This method is quite sophisticated and is the previous state-of-the-art method. It first constructs a dialogical argumentation knowledge graph. Then, it uses a path-based graph convolutional network to encode the concepts and the reasoning paths between concepts from the contexts. Finally, it aligns the conceptual information with the semantic information obtained by BERT.
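As a minimal numerical sketch of the training objective (Eqs. 9–13), the losses can be written in plain Python. In the actual model these operate on differentiable tensors of [CLS] embeddings with learnable noise parameters; the scalar version below is only illustrative:

```python
import math

def bce_loss(y, p):
    # Binary cross-entropy, Eq. (9); p are probabilities in (0, 1).
    n = len(y)
    return -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for yi, pi in zip(y, p)) / n

def cosine(a, b):
    dot = sum(x * z for x, z in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(z * z for z in b))
    return dot / (na * nb)

def scl_loss(h, y, tau=0.05):
    # Supervised contrastive loss, Eqs. (10)-(11): pull same-label
    # representations together, push different-label ones apart.
    n = len(h)
    total = 0.0
    for i in range(n):
        n_yi = sum(1 for j in range(n) if y[j] == y[i])
        if n_yi < 2:  # no positives for this anchor
            continue
        denom = sum(math.exp(cosine(h[i], h[k]) / tau)
                    for k in range(n) if k != i)
        for j in range(n):
            if j != i and y[j] == y[i]:
                total += math.log(math.exp(cosine(h[i], h[j]) / tau)
                                  / denom) / (n_yi - 1)
    return -total / n

def uw_loss(l1, l2, log_sigma1, log_sigma2):
    # Uncertainty weighting, Eq. (12), parameterized by log-sigmas
    # (which would be learnable); log(s1*s2) = log_sigma1 + log_sigma2.
    s1, s2 = math.exp(log_sigma1), math.exp(log_sigma2)
    return l1 / (2 * s1 ** 2) + l2 / (2 * s2 ** 2) + log_sigma1 + log_sigma2
```

A batch whose embeddings cluster by label yields a much smaller `scl_loss` than one whose clusters straddle the labels, which is exactly the pressure Eq. (10) puts on the encoder.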
+ +# 4.2 Overall Performance + +Previous methods (Lu et al., 2021; Yuan et al., 2021b) treat the task as a sentence pair ranking + +
| Method | P@1 (%) | MRR (%) |
| --- | --- | --- |
| Random Guess | 20 | 45.67 |
| BiGRU | 51.52 | 70.57 |
| BiGRU + RNN Context | 55.98 | 73.20 |
| BiGRU + Hierarchical Context | 57.46 | 73.72 |
| VAE + Hierarchical Context | 58.61 | 74.66 |
| DVAE + Hierarchical Context | 61.17 | 76.16 |
| BERT | 61.85 | 76.57 |
| BERT + Hierarchical Context | 66.85 | 78.51 |
| BERT + Knowledge Graph + GCN + Context* | 68.75 | 80.85 |
| Ours | 82.17 | 89.60 |
problem. Precision at one (P@1) and mean reciprocal rank (MRR) are used as evaluation metrics, so for comparison purposes we also adopt them. For the calculation of MRR, we use the classification probabilities to produce ranks. The results are listed in Table 1. From the table, we can make the following observations:

- The introduction of contextual information is crucial for this task. When contextual information is added, every method outperforms its context-free counterpart. Therefore, how to make better use of the contextual information is essential for this task.
- Compared with the previous state-of-the-art method, our method shows a remarkable performance improvement: it outperforms the state-of-the-art method by $13.42\%$ and $8.75\%$ in P@1 and MRR, respectively. There are two main reasons. First, the argument-context extraction module (Section 3.2) selects the most important context blocks for the current quotation and reply, reducing the interference of redundant information. This reconfirms the importance of making full use of contextual information. Second, the introduction of contrastive learning enables the model to learn more robust semantic embeddings, substantially improving its ability to discriminate argument pairs. In addition, compared to the previous complicated models (Lu et al., 2021; Yuan et al., 2021b), we only use BERT, which is considerably simpler and reduces the number of parameters.

Table 1: Experimental results of our method and previous methods on the test dataset, where the sign “*” marks the previous state-of-the-art method.
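The two metrics can be computed directly from per-candidate classification probabilities, as in the following sketch (the probability values are invented for illustration):

```python
def p_at_1_and_mrr(instances):
    # Each instance is a list of (probability, is_correct) pairs for the
    # five candidate replies; ranks come from the classification
    # probabilities, as described above.
    p1_hits, rr_sum = 0, 0.0
    for cands in instances:
        ranked = sorted(cands, key=lambda c: c[0], reverse=True)
        rank = next(i for i, (_, ok) in enumerate(ranked, start=1) if ok)
        p1_hits += (rank == 1)
        rr_sum += 1.0 / rank
    n = len(instances)
    return p1_hits / n, rr_sum / n

instances = [
    [(0.9, True), (0.3, False), (0.2, False), (0.1, False), (0.05, False)],
    [(0.4, False), (0.7, False), (0.6, True), (0.1, False), (0.2, False)],
]
p1, mrr = p_at_1_and_mrr(instances)
# First instance: correct reply ranked 1st; second: ranked 2nd.
# P@1 = 1/2 = 0.5, MRR = (1 + 1/2) / 2 = 0.75
```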
| Method | P@1 (%) | MRR (%) |
| --- | --- | --- |
| BERT-BCE (baseline) | 63.54 | 77.82 |
| + ACE | 75.35 | 85.34 |
| + CL | 80.01 | 88.35 |
| + Hard | **82.17** | **89.60** |
Table 2: Ablation study on each module. "BERT-BCE" denotes the BERT model trained with binary cross-entropy loss. "ACE" denotes the argument-context extraction module (Section 3.2). "CL" denotes contrastive learning (Section 3.3). "Hard" denotes contrastive learning with hard samples construction (Section 3.3.2). The best results are highlighted in bold. The same below.

# 4.3 Ablation Study

In this section, we investigate the quantitative impact of each module on the final performance. The results of the ablation study are shown in Table 2. We use the BERT model trained with binary cross-entropy loss as the baseline; note that it does not use contextual information (detailed in Section 4.1.3). After adding the argument-context extraction module, the experimental results show a remarkable improvement in both metrics. The model's performance improves by $11.81\%$ in P@1 and $7.52\%$ in MRR, which directly demonstrates the effectiveness of the argument-context extraction module. It also shows that the context contains a lot of valuable information, which is essential for this task. The introduction of contrastive learning improves the performance by $4.66\%$ in P@1 and $3.01\%$ in MRR. Evidently, the model can learn more robust semantic representations by adding the contrastive learning training objective. Here,
| Method | P@1 (%) | MRR (%) |
| --- | --- | --- |
| Without context | 63.54 | 77.82 |
| Low similarity | 69.14 | 81.40 |
| Random | 73.60 | 84.15 |
| High similarity (ours) | **75.35** | **85.34** |
we use uncertainty weighting (detailed in Section 3.3.4) to blend the contrastive loss and the BCE loss by default. Finally, the model performance is further improved significantly by constructing more hard samples. Note that the hard sample construction is a replaceable module, although it can promote the effect of contrastive learning.

# 5 Further Analyses

# 5.1 Analysis on ACE Module

To further validate the effectiveness of the argument-context extraction module, we explore the context block selection strategy; the results are shown in Table 3. We can make the following observations. First, the results again demonstrate the importance of contextual information and the effectiveness of the argument-context extraction module: even using "low-similarity" blocks, our method achieves a significant performance improvement over the baseline (5.6% in P@1). Second, using "low-similarity" blocks also achieves good results. We believe "low-similarity" blocks still carry a lot of valuable information, because some contexts yield only a few blocks after segmentation, so the "high-similarity" and "low-similarity" selections share many overlapping blocks. Finally, compared with "Low similarity" and "Random", "High similarity" achieves a significant improvement, which shows that the context contains redundant information harmful to model identification.

# 5.2 Analysis on Hard Samples Construction

In this section, we further explore the effect of the hard samples construction module and explain why it works. Further experimental results are shown in Table 4. The performance improvement can

Table 3: Performance comparison under different block selection strategies. "Low similarity" denotes the selection of the blocks with a low similarity ranking among the candidate blocks. "Random" denotes random selection. "High similarity (ours)" denotes the selection of the blocks with a high similarity ranking among the candidate blocks.
The best results are highlighted in bold. + +
| Method | P@1 (%) | MRR (%) |
| --- | --- | --- |
| ACE | 75.35 | 85.34 |
| Hard without CL | 80.62 | 88.75 |
| Hard with CL | **82.17** | **89.60** |
+ +Table 4: Further experimental results on hard samples construction. + +
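The contrast between normal and hard inputs (Sections 3.2.3 and 3.3.2) reduces to the block selection step; a minimal sketch, with the block texts and similarity scores invented for illustration:

```python
import random

def select_blocks(blocks, sims, n, hard=False, seed=0):
    # Normal samples keep the n blocks most similar to the argument
    # (Section 3.2.3); hard samples instead draw n blocks at random,
    # deliberately letting irrelevant context through (Section 3.3.2).
    if hard:
        chosen = random.Random(seed).sample(range(len(blocks)),
                                            min(n, len(blocks)))
    else:
        chosen = sorted(range(len(blocks)), key=lambda i: sims[i],
                        reverse=True)[:n]
    # Concatenate the chosen blocks in their order of appearance
    # in the context.
    return " ".join(blocks[i] for i in sorted(chosen))

blocks = ["terrorists deserve a trial", "religion is irrelevant",
          "drones are overused", "courts exist for a reason"]
sims = [0.81, 0.12, 0.09, 0.64]  # hypothetical argument-block similarities
normal_input = select_blocks(blocks, sims, n=2)           # top-2 blocks
hard_input = select_blocks(blocks, sims, n=2, hard=True)  # random blocks
```

In the full pipeline a hard sample additionally uses a different block size $\alpha$ when re-segmenting the context, which is omitted here.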
| Method | O | (a) | (b) | (c) |
| --- | --- | --- | --- | --- |
| BCE | 80.62 | 74.61 | 80.21 | 78.19 |
| BCE+CL | **82.17** | **76.10** | **81.17** | **78.53** |
Table 5: Results on noisy testing sets with varying kinds of noise. "O" denotes the original text. "(a), (b), (c)" denote the three kinds of noise. We use P@1 as the metric.

be decomposed into two parts: (1) the hard samples construction module without contrastive learning, and (2) the hard samples construction module with contrastive learning. Without contrastive learning, constructing hard samples is comparable to a data augmentation strategy; the performance is still significantly improved compared to the ACE module alone. We consider two explanations for this phenomenon. On the one hand, adding hard samples to the original dataset increases the scale of the dataset, and thus the model achieves better performance. On the other hand, we construct many positive samples, which somewhat smooths the ratio of positive to negative samples (previously 1:4, now 1:2). With contrastive learning, the performance is further improved. This is because the contrastive loss is a hardness-aware loss (Wang and Liu, 2021; Gunel et al., 2020), and $\tau$ controls the strength of the penalties on hard samples. Through experimental and empirical analysis (detailed in Appendix B.2), we set $\tau$ to 0.05. Under this setting, the model focuses more on hard samples, resulting in a more uniform representation and better performance.

# 5.3 Robustness on Noisy Dataset

To evaluate robustness and stability, we add some noise to our testing set. We design three kinds of noise: (a) selecting low-similarity context blocks instead of high-similarity ones, (b) applying random augmentation (swap, crop, delete), and (c) simulating keyboard-distance errors. An example of constructing a noisy sample is shown in Table 6. In practice, we use the NLPAUG (Ma, 2019) library. In Table 5, we report our results on the noisy testing sets with different kinds of noise. Obviously, consistent
| Noisy method | Text |
| --- | --- |
| Original text | i am willing to bet that john boehner would have an easier time dealing with congress as president than joe biden would due to his constant interaction with it. |
| Augmentation randomly | am willing that john have an easier time dealing with congress as president than joe would due his interaction it. |
| Simulate keyboard distance error | am !Jllijg rhaR john have an easier time vWalinb S7th dpnnress as president rhwn joe 1ouKd due his interaction it. |
Table 6: An instance of constructing a noisy sample.

![](images/e1959c48bde62d5dd536cb3f854e4c436cd9d42f715e891e5341ea4251408caa.jpg)
Figure 5: t-SNE plots of the learned CLS embeddings on the testing set. Left: BCE; Right: BCE+CL; Violet: negative examples; Yellow: positive examples.

improvements of BCE+CL over BCE are observed across all noise kinds, which shows that our method leads to models that are more robust to different kinds of noise in the testing data.

# 5.4 Visualization

In Figure 5, we show t-SNE (Van der Maaten and Hinton, 2008) plots of the learned CLS embeddings on the testing set. We can clearly observe that the BCE+CL objective enforces a more compact clustering of examples with the same label, while the distribution of the embeddings learned with BCE alone is not compact. This shows that we obtain a robust and uniform representation by introducing contrastive learning.

# 6 Conclusion and Future Work

This paper proposes a simple contrastive learning framework which provides a new perspective on data augmentation with text input for this task, and which can be extended to similar tasks in language model fine-tuning. Besides, we propose an argument-context extraction (ACE) module to enhance information extraction by discarding irrelevant blocks. The experimental results show that our method achieves state-of-the-art performance on the benchmark dataset. Further analysis demonstrates the effectiveness of our proposed modules and visually displays more compact semantic representations.

In the future, we plan to explore two research directions. On the one hand, we will try to apply the framework to other computational argumentation tasks. On the other hand, we will explore the application of interactive argument identification in different fields, such as doctor consultation and student classroom discussion.

# Limitations

There may be some possible limitations in this study.
We observe a few arguments that express little information. Their subjects are primarily pronouns, in which case our ACE module may be limited. For example, an argument is "no offense, but that is incredibly stupid/selfish". Since the sentence expresses only a small amount of information, semantic similarity may not fully reflect the correlation between sentences, which affects the ACE module to some extent. In addition, although the performance is significantly improved after adding contrastive learning and the construction of hard samples, this also increases the computational cost of training. In the future, we will design a more universal contextual enhancement module by introducing graph neural networks.

# Acknowledgements

This research is supported by the National Natural Science Foundation of China (62077027), the program of China Scholarships Council (No.202007820024), the Ministry of Science and Technology of the People's Republic of China (2018YFC2002500), and the Department of Science and Technology of Jilin Province, China (20200801002GH).

# References

Tariq Alhindi and Debanjan Ghosh. 2021. "sharks are not the threat humans are": Argument component segmentation in school student essays. arXiv preprint arXiv:2103.04518.
Lauscher Anne, Ng Lily, Naples Courtney, and Tetreault Joel. 2020. Rhetoric, logic, and dialectic: Advancing theory-based argument quality assessment in natural language processing. *COLING*, pages 4563-4574.
S. C. Christa Asterhan and B. Baruch Schwarz. 2007. The effects of monological and dialogical argumentation on concept learning in evolutionary theory. Journal of Educational Psychology, pages 626-639.
Teresa Botschen, Daniil Sorokin, and Iryna Gurevych. 2018. Frame- and entity-based knowledge for common-sense argumentative reasoning. *ArgMining@EMNLP*, pages 90–96.
Liying Cheng, Lidong Bing, Qian Yu, Wei Lu, and Luo Si. 2020.
APE: Argument pair extraction from peer review and rebuttal via multi-task learning. *EMNLP 2020*. +Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljacic, Shang-Wen Li, Wen-tau Yih, Yoon Kim, and James Glass. 2022. DiffCSE: Difference-based contrastive learning for sentence embeddings. arXiv preprint arXiv:2204.10298. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. *NAACL-HLT*. +Ming Ding, Chang Zhou, Hongxia Yang, and Jie Tang. 2020. CogLTX: Applying BERT to long texts. Advances in Neural Information Processing Systems, 33:12792-12804. +Subhabrata Dutta, Jeevesh Juneja, Dipankar Das, and Tanmoy Chakraborty. 2022. Can unsupervised knowledge transfer from social discussions help argument mining? arXiv preprint arXiv:2203.12881. +Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining. arXiv preprint arXiv:1704.06104. +Hongchao Fang, Sicheng Wang, Meng Zhou, Jiayuan Ding, and Pengtao Xie. 2020. CERT: Contrastive self-supervised learning for language understanding. arXiv preprint arXiv:2005.12766. +Andrea Galassi, Marco Lippi, and Paolo Torroni. 2018. Argumentative link prediction using residual networks and multi-objective learning. *ArgMining@EMNLP*, pages 1-10. + +Andrea Galassi, Marco Lippi, and Paolo Torroni. 2021. Multi-task attentive residual networks for argument mining. arXiv preprint arXiv:2102.12227. +Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. arXiv preprint arXiv:2104.08821. +Beliz Gunel, Jingfei Du, Alexis Conneau, and Ves Stoyanov. 2020. Supervised contrastive learning for pre-trained language model fine-tuning. arXiv preprint arXiv:2011.01403. +Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. 2018.
The argument reasoning comprehension task: Identification and reconstruction of implicit warrants. *NAACL-HLT*, pages 1930–1940. +Lu Ji, Zhongyu Wei, Xiangkun Hu, Yang Liu, Qi Zhang, and Xuanjing Huang. 2018. Incorporating argument-level interactions for persuasion comments evaluation using co-attention model. *COLING*, pages 3703-3714. +Yohan Jo, Jacky Visser, Chris Reed, and Eduard Hovy. 2019. A cascade model for proposition extraction in argumentation. In Proceedings of the 6th Workshop on Argument Mining, pages 11-24. Association for Computational Linguistics. +Alex Kendall, Yarin Gal, and Roberto Cipolla. 2018. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7482-7491. +Khalid Al Khatib, Yufang Hou, Henning Wachsmuth, Charles Jochim, Francesca Bonin, and Benno Stein. 2020. End-to-end argumentation knowledge graph construction. *AAAI*. +Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. Advances in Neural Information Processing Systems, 33:18661-18673. +John Lawrence and Chris Reed. 2020. Argument mining: A survey. Computational Linguistics, 45(4):765-818. +Minghan Li and Eric Gaussier. 2021. KeyBLD: Selecting key blocks with local pre-ranking for long document information retrieval. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2207-2211. +Minghan Li, Diana Nicoleta Popa, Johan Chagnon, Yagmur Gizem Cinar, and Eric Gaussier. 2021. The power of selecting key blocks with local pre-ranking for long document information retrieval. arXiv preprint arXiv:2111.09852. +Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in Adam. + +Lu Ji, Zhongyu Wei, Jing Li, Qi Zhang, and Xuanjing Huang.
2021. Discrete argument representation learning for interactive argument pair identification. *NAACL-HLT*, pages 5467-5478. +Anastasios Lytos, Thomas Lagkas, Panagiotis Sarigianidis, and Kalina Bontcheva. 2019. The evolution of argumentation mining: From models to social media and emerging tools. Information Processing & Management, 56(6):102055. +Edward Ma. 2019. Nlp augmentation. https://github.com/makcedward/nlpaug. +Tobias Mayer, Elena Cabrio, and Serena Villata. 2020. Transformer-based argument mining for healthcare applications. *ECAI*, pages 2108-2115. +Gaku Morio, Hiroaki Ozaki, Terufumi Morishita, Yuta Koreeda, and Kohsuke Yanai. 2020. Towards better non-tree argument mining: Proposition-level biaffine parsing with task-specific parameterization. ACL, pages 3259-3266. +Vlad Niculae, Joonsuk Park, and Claire Cardie. 2017. Argument mining with structured svms and rnns. ACL. +Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32:8026-8037. +Ramon Ruiz-Dolz, Jose Alemany, Stella M Heras Barbara, and Ana Garcia-Fornes. 2021. Transformer-based models for automatic identification of argument relations: A cross-domain evaluation. IEEE Intelligent Systems, 36(6):62-70. +Gabriella Skitalinskaya, Jonas Klaff, and Henning Wachsmuth. 2021. Learning from revisions: Quality assessment of claims in argumentation at scale. EACL, pages 1718-1729. +Christian Stab and Iryna Gurevych. 2014. Annotating argument components and relations in persuasive essays. *COLING*, pages 1501–1510. +Chenhao Tan, Vlad Niculae, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. WWW, pages 613-624. +Laurens Van der Maaten and Geoffrey Hinton. 2008. 
Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11). +Feng Wang and Huaping Liu. 2021. Understanding the behaviour of contrastive loss. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2495-2504. + +Zhongyu Wei, Yang Liu, and Yi Li. 2016. Is this post persuasive? Ranking argumentative comments in online forum. *ACL*. +Thomas Wolf, Julien Chaumond, Lysandre Debut, Victor Sanh, Clement Delangue, Anthony Moi, Pierric Cistac, Morgan Funtowicz, Joe Davison, Sam Shleifer, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45. +Xing Wu, Chaochen Gao, Liangjun Zang, Jizhong Han, Zhongyuan Wang, and Songlin Hu. 2021. ESimCSE: Enhanced sample building method for contrastive learning of unsupervised sentence embedding. arXiv preprint arXiv:2109.04380. +Jian Yuan, Zhongyu Wei, Yixu Gao, Wei Chen, Yun Song, Donghua Zhao, Jinglei Ma, Zhen Hu, Shaokun Zou, Donghai Li, et al. 2021a. Overview of SMP-CAIL2020-Argmine: The interactive argument-pair extraction in judgement document challenge. Data Intelligence, 3(2):287-307. +Jian Yuan, Zhongyu Wei, Donghua Zhao, Qi Zhang, and Changjian Jiang. 2021b. Leveraging argumentation knowledge graph for interactive argument pair identification. *ACL/IJCNLP*, pages 2310-2319. + +Algorithm 1: Block Segmentation +Input: Context $c$ , punctuation costs $\mathrm{cost}$ , basic cost $co$ , max block size $\alpha$ +1 Initialize $f[0] \ldots f[\alpha - 1]$ as 0; +2 Initialize from [0] ...
from $[\alpha - 1]$ as $-1$; +3 for $i$ from $\alpha$ to $\mathrm{len}(c) - 1$ do +4 $f[i] = +\infty$; +5 for $j$ from $i - \alpha$ to $i - 1$ do +6 if the word at $j$ is punctuation then +7 $v = \mathrm{cost}[word] + f[j]$; +8 else +9 $v = co + f[j]$; +10 if $v < f[i]$ then +11 $f[i] = v$, $\mathrm{from}[i] = j$; +12 $t = \mathrm{len}(c) - 1$, $\mathrm{blocks} = [\,]$; +13 while $t \geq 0$ do +14 prepend $c[\mathrm{from}[t] + 1 \ldots t]$ to blocks; $t = \mathrm{from}[t]$; +15 return blocks + +# B Hyperparameter Sensitivity Analysis + +In this section, we investigate the impact of two hyperparameters on our method. $\alpha$ (detailed in Section 3.2.1) is the block size in the context block segmentation module. It not only affects the similarity calculation between the quotation-reply pair and each block but also determines the number of blocks input to the model, because of the length limitation of BERT. $\tau$ (detailed in Section 3.3.3) is a scalar temperature hyperparameter that controls the separation of classes. We analyze the two hyperparameters below. + +# B.1 The Effect of $\alpha$ on Performance + +To explore the impact of $\alpha$, we set $\alpha \in \{16, 32, 42, 48, 64\}$. Accordingly, the number of input blocks is $num \in \{16, 8, 6, 6, 4\}$ because of the input length limitation of BERT. The results are shown in Table 8. Here we explain how to set the number of blocks. In theory, the number and size of input blocks should satisfy Equation 5. However, in practice, we observe that the size of each block is often smaller than the block size we set, because the length of each sentence is uncertain. For example, we set $\alpha = 64$. In practice, the length of most of the blocks is less
| $\tau$ | P@1 (%) | MRR (%) |
| :-- | :-: | :-: |
| 0.03 | 80.62 | **88.97** |
| 0.05 | **81.03** | 88.78 |
| 0.1 | 80.69 | 88.67 |
| 0.15 | 79.95 | 88.38 |
| 0.2 | 79.81 | 88.30 |
| 0.4 | 79.68 | 88.24 |
| 0.6 | 79.34 | 88.10 |
| 0.8 | 79.09 | 87.88 |
+ +Table 7: The results with different $\tau$ . The best results are highlighted in bold. + +
| Setting | P@1 (%) | MRR (%) |
| :-- | :-: | :-: |
| $num = 16$, $\alpha = 16$ | 73.13 | 83.97 |
| $num = 8$, $\alpha = 32$ | 74.41 | 84.82 |
| $num = 6$, $\alpha = 42$ | **75.11** | **85.67** |
| $num = 6$, $\alpha = 48$ | 74.14 | 84.57 |
| $num = 4$, $\alpha = 64$ | 75.08 | 85.08 |
+ +Table 8: The results with different $num$ and $\alpha$. The best results are highlighted in bold. + +than 64. Therefore, when setting the number of blocks, we should satisfy $\alpha \times num \approx 256$, where $num$ denotes the number of blocks in the input. The experimental results are shown in Table 8. Note that Table 8 reports the results before introducing contrastive learning. When $num = 6$, $\alpha = 42$, both metrics achieve the best results, which we explain as follows. When $\alpha$ is very small, the continuity of the sentences is limited, resulting in incoherent semantic information. When $\alpha$ is very large, the excessive block size inevitably introduces irrelevant information into the sentence blocks, which hurts the model's identification. In addition, compared with $num = 6$, $\alpha = 42$ and $num = 4$, $\alpha = 64$, the setting $num = 6$, $\alpha = 48$ shows a significant performance degradation. We attribute this to information loss from input truncation, since $6 \times 48 = 288 > 256$. Therefore, the optimal combination is actually a trade-off, and in the other experiments we use $num = 6$, $\alpha = 42$. + +# B.2 The Effect of $\tau$ on Performance + +As noted by Wang and Liu (2021) and Gunel et al. (2020), the contrastive loss is a hardness-aware loss: $\tau$ controls the strength of the penalties on hard negative samples. A small $\tau$ tends to produce a more uniform distribution and is less tolerant of similar samples. In this section, we explore the impact of $\tau$ on this task. We set $\tau \in \{0.03, 0.05, 0.1, 0.15, 0.2, 0.4, 0.6, 0.8\}$. The results are shown in Table 7. From the experimental results, $\tau = 0.05$ is the optimal hyperparameter; as $\tau$ increases beyond this, the results degrade steadily. In the other experiments, we use $\tau = 0.05$.
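Algorithm 1 (Appendix A) can be transcribed into a short dynamic program. The sketch below is an illustrative reconstruction: the paper's punctuation cost table and basic cost are not published, so `punct_cost` and `base_cost` are placeholder inputs, and the punctuation check is assumed to apply to the token at position $j$.

```python
def segment_blocks(context, punct_cost, base_cost, alpha):
    """Dynamic program of Algorithm 1: split `context` (a token list) into
    blocks of at most `alpha` tokens, preferring cheap breaks at punctuation.
    `punct_cost` / `base_cost` are placeholders for the paper's cost settings."""
    n = len(context)
    f = [0.0] * n       # f[i]: minimum cost of segmenting context[0..i]
    frm = [-1] * n      # frm[i]: position after which the last block starts
    for i in range(alpha, n):
        f[i] = float("inf")
        for j in range(i - alpha, i):
            cost = punct_cost.get(context[j], base_cost)  # cheap at punctuation
            if cost + f[j] < f[i]:
                f[i], frm[i] = cost + f[j], j
    blocks, t = [], n - 1
    while t >= 0:       # backtrack: each block spans context[frm[t]+1 .. t]
        blocks.insert(0, context[frm[t] + 1 : t + 1])
        t = frm[t]
    return blocks
```

Because `frm[i]` always lies in `[i - alpha, i - 1]`, the backtracking step recovers blocks of at most $\alpha$ tokens, matching the prepend loop in lines 12-15 of the listing.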
+ +# C Error Analysis + +We observe two main problems in the instances where our method makes wrong predictions: + +- As mentioned above, a few arguments express little information and their subjects are primarily pronouns; in such cases our ACE module may be of limited use. For example, one argument is "no offense, but that is incredibly stupid/selfish." Since the sentence expresses only a small amount of information, semantic similarity may not fully reflect the correlation between sentences, which affects the ACE module to some extent. In this case, it might be better to use the adjacent context block directly. + +- In addition, some contexts are relatively short, even less than 200 words. In this case, the ACE module uses the whole context as the model input and may add information that is not relevant to the argument, which is one of the reasons for the model's wrong predictions. \ No newline at end of file diff --git a/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/images.zip b/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..53d49297711798f6dfe93be509de7313cff0f8a4 --- /dev/null +++ b/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b15e941677a7274e7088be2b7ddaffa161354b0a8d1ed93ddad93524cdeb6f1 +size 539382 diff --git a/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/layout.json b/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b90db12e07338509a35cc41f35e6642392d5257f --- /dev/null
+++ b/asimplecontrastivelearningframeworkforinteractiveargumentpairidentificationviaargumentcontextextraction/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17d96929248fd70941ca9ce53e861b08e8b21866be1bb7f0529e1fb2c0f031b7 +size 470673 diff --git a/aspanbasedmultimodalvariationalautoencoderforsemisupervisedmultimodalnamedentityrecognition/d39f6fe7-0750-4f2a-98a1-6a15d7d78145_content_list.json b/aspanbasedmultimodalvariationalautoencoderforsemisupervisedmultimodalnamedentityrecognition/d39f6fe7-0750-4f2a-98a1-6a15d7d78145_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9ce42f662811aa53381cb605f7724b6becabb7ff --- /dev/null +++ b/aspanbasedmultimodalvariationalautoencoderforsemisupervisedmultimodalnamedentityrecognition/d39f6fe7-0750-4f2a-98a1-6a15d7d78145_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44dd4846dec414b120ea8a2b481bc3f79228a2e80d235508598a3a5e46f06244 +size 73292 diff --git a/aspanbasedmultimodalvariationalautoencoderforsemisupervisedmultimodalnamedentityrecognition/d39f6fe7-0750-4f2a-98a1-6a15d7d78145_model.json b/aspanbasedmultimodalvariationalautoencoderforsemisupervisedmultimodalnamedentityrecognition/d39f6fe7-0750-4f2a-98a1-6a15d7d78145_model.json new file mode 100644 index 0000000000000000000000000000000000000000..fccc44f6de11ce14f8701691e41c35a301504512 --- /dev/null +++ b/aspanbasedmultimodalvariationalautoencoderforsemisupervisedmultimodalnamedentityrecognition/d39f6fe7-0750-4f2a-98a1-6a15d7d78145_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67e860368ed22b0888ece60fc3aca8955eb3933e8e16ff858672dc4fbfef4c6f +size 85034 diff --git a/aspanbasedmultimodalvariationalautoencoderforsemisupervisedmultimodalnamedentityrecognition/d39f6fe7-0750-4f2a-98a1-6a15d7d78145_origin.pdf 
b/aspanbasedmultimodalvariationalautoencoderforsemisupervisedmultimodalnamedentityrecognition/d39f6fe7-0750-4f2a-98a1-6a15d7d78145_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..bff8ecce5b9477298c530184eacf2077b859fb72 --- /dev/null +++ b/aspanbasedmultimodalvariationalautoencoderforsemisupervisedmultimodalnamedentityrecognition/d39f6fe7-0750-4f2a-98a1-6a15d7d78145_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:481f9f6f286ce78a08a52462e5df7cd0ec83ba1dbf54403ac56f66ce10783e6c +size 1946419 diff --git a/aspanbasedmultimodalvariationalautoencoderforsemisupervisedmultimodalnamedentityrecognition/full.md b/aspanbasedmultimodalvariationalautoencoderforsemisupervisedmultimodalnamedentityrecognition/full.md new file mode 100644 index 0000000000000000000000000000000000000000..5f8121b63cb4c7dedde59233eb79ec4651e80e4d --- /dev/null +++ b/aspanbasedmultimodalvariationalautoencoderforsemisupervisedmultimodalnamedentityrecognition/full.md @@ -0,0 +1,251 @@ +# A Span-based Multimodal Variational Autoencoder for Semi-supervised Multimodal Named Entity Recognition + +Baohang Zhou $^{1,2}$ , Ying Zhang $^{1,2,*}$ , Kehui Song $^{1,2}$ , Wenya Guo $^{1,2}$ , Guoqing Zhao $^{3}$ , Hongbin Wang $^{3}$ , Xiaojie Yuan $^{1,2}$ + +1 College of Computer Science, Nankai University, Tianjin, China + +$^{2}$ Tianjin Key Laboratory of Network and Data Security Technology, Tianjin, China + +3Mashang Consumer Finance Co, Ltd + +{zhoubaohang,zhangying,songkehui,guowenya}@dbis.nankai.edu.cn + +{guoqing.zhao02,hongbin.wang02}@msxf.com, yuanxj@nankai.edu.cn + +# Abstract + +Multimodal named entity recognition (MNER) on social media is a challenging task which aims to extract named entities in free text and incorporate images to classify them into user-defined types. The existing semi-supervised named entity recognition methods focus on the text modal and are utilized to reduce labeling costs in traditional NER. 
However, the previous methods are not efficient for semi-supervised MNER, since the MNER task combines text information with image information and needs to account for mismatches between the posted text and image. To fuse the text and image features for MNER effectively under the semi-supervised setting, we propose a novel span-based multimodal variational autoencoder (SMVAE) model for semi-supervised MNER. The proposed method exploits modal-specific VAEs to model text and image latent features, and utilizes product-of-experts to acquire multimodal features. In our approach, the implicit relations between labels and multimodal features are modeled by a multimodal VAE. Thus, the useful information in unlabeled data can be exploited by our method under the semi-supervised setting. Experimental results on two benchmark datasets demonstrate that our approach not only outperforms baselines under the supervised setting, but also improves MNER performance with less labeled data than existing semi-supervised methods.

# 1 Introduction

Multimodal named entity recognition (MNER) has become a fundamental task to extract named entities from unstructured texts and images on social media (Moon et al., 2018). Compared with traditional named entity recognition (NER), MNER on social media poses the unique challenge that bridging the semantic gap between the posted texts and images is critical to extracting named entities.

![](images/23e5d3b69909ad43658cfa8d1d28dae56619a405c4a25cd5d9ef18a8a9ea2c70.jpg)
Figure 1: The settings comparison between supervised and semi-supervised multimodal named entity recognition. For the labeled data, the named entities and their types are highlighted in brackets and different colors.

Therefore, existing MNER models utilize cross-modal attention modules to fuse the text and image features (Yu et al., 2020; Zhang et al., 2021). Besides, Xu et al. (2022) proposed cross-modal matching and alignment modules to make the representations of the texts and images more consistent. And to retain the useful image information for MNER, Liu et al. (2022) exploited a two-stage model to refine uncertain labels by fusing the features from the texts and images.
Intuitive semi-supervised learning methods, including self-training (ST) (Yarowsky, 1995) and entropy minimization (EM) (Grandvalet and Bengio, 2004), train models with the pseudo labels that NER models generate for unlabeled data. The NER task can be modeled as a sequence labeling problem, and SeqVAT (Chen et al., 2020) was proposed to combine virtual adversarial training (VAT) (Miyato et al., 2019) with conditional random fields (CRF) (Lafferty et al., 2001) for semi-supervised sequence labeling. However, the existing semi-supervised NER methods are not efficient for MNER under the semi-supervised setting, because they focus only on the text modality, while MNER needs to consider the semantic correlation between the texts and images of both labeled and unlabeled data. + +To overcome these disadvantages, we propose the span-based multimodal variational autoencoder (SMVAE)1 for semi-supervised multimodal named entity recognition. Previous MNER models fuse sentence-level features with image features to predict sequence labels, and have difficulty modeling the multimodal features of unlabeled data under the semi-supervised setting, because the semantic correlation between sentences and images should be focused on specific tokens. Therefore, the proposed method splits the texts into span-level tokens, and combines the span-level text features with image features to predict the labels of all spans in each text. SMVAE utilizes modal-specific VAEs to model the latent representations of images and span-level texts respectively, and acquires the multimodal features by applying product-of-experts (PoE) (Hinton, 2002) to the latent representations of the two modalities. The prediction probabilities and multimodal features are exploited to reconstruct the input features, implicitly modeling the correlation between span labels and multimodal features.
Therefore, the useful information in unlabeled multimodal data can be exploited to improve MNER performance. The contributions of this manuscript can be summarized as follows: + +1. We show that the existing semi-supervised NER methods are not efficient for MNER under the semi-supervised setting. To the best of our knowledge, we are the first to focus on the semi-supervised MNER problem. +2. For semi-supervised MNER, we propose the span-based multimodal variational autoencoder to implicitly model the correlation between span labels and multimodal features, which takes advantage of unlabeled multimodal data effectively. +3. We compare the proposed model with semi-supervised methods and state-of-the-art MNER models on two benchmark datasets under the semi-supervised setting. The experimental results demonstrate that our model outperforms the baseline approaches. + +# 2 Related Work + +# 2.1 Multimodal Named Entity Recognition + +Moon et al. (2018) first extended traditional text-based named entity recognition (NER) to multimodal named entity recognition (MNER) by taking images into account. The key challenge of MNER is to fuse the text features with the image features. Moon et al. (2018) proposed to utilize long short-term memory (LSTM) networks to extract text features and convolutional neural networks (CNNs) to extract image features, and to combine them with a modality attention module to predict sequence labels. Zhang et al. (2018) proposed an adaptive co-attention network to control the combination of text and image representations dynamically. To extract the image regions that are most related to the text, Lu et al. (2018) utilized an attention-based model to fuse the text and image features. Yu et al. (2020) proposed a uniform multimodal transformer that enhances the interactions of the text and image modalities for the MNER task. With the development of multimodal knowledge graphs, Chen et al.
(2021) exploited image attributes and semantic knowledge to improve the performance of the MNER model. To avoid the influence of mismatches between texts and images, Xu et al. (2022) proposed cross-modal alignment and matching modules to fuse the text and image representations consistently. Besides, Liu et al. (2022) designed a two-stage model that combines text features with image features to refine uncertain labels. + +The above studies are under the supervised setting, while we focus on semi-supervised MNER to reduce labeling costs. Unlike supervised learning with adequate labeled data, semi-supervised learning focuses on utilizing the useful information in unlabeled data. + +# 2.2 Semi-supervised Learning for Named Entity Recognition + +For traditional named entity recognition, labeled data is not always adequate because of labeling costs. Therefore, semi-supervised learning is an important way to improve NER model performance without enough labeled data. Two widely used semi-supervised learning methods, self-training (ST) (Yarowsky, 1995) and entropy minimization (EM) (Grandvalet and Bengio, 2004), have proven effective for NER (Chen et al., 2020). Clark et al. (2018) proposed the cross-view training method to make predictions consistent when utilizing partial or full inputs. To combine virtual adversarial training (VAT) (Miyato et al., 2019) with conditional random fields (CRF) (Lafferty et al., 2001) for semi-supervised sequence labeling, Chen et al. (2020) proposed the SeqVAT model to improve the robustness and accuracy of NER models. + +The existing methods focus on the text modality, and semi-supervised MNER is proposed to take advantage of unlabeled multimodal data. Therefore, we work on semi-supervised MNER to improve model performance without adequate labeled multimodal data.
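The self-training (ST) baseline discussed above can be sketched as a generic loop. This is a minimal illustration, not the paper's setup: `fit` and `predict_proba` stand in for any classifier, and the confidence threshold is an arbitrary choice.

```python
def self_training(labeled, unlabeled, fit, predict_proba, threshold=0.9, rounds=5):
    """Generic self-training (ST) loop: train on labeled data, pseudo-label
    the unlabeled examples the model is confident about, and retrain.
    `fit(data) -> model` and `predict_proba(model, x) -> (label, confidence)`
    are placeholders for any classifier (illustrative only)."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = fit(labeled)
        still_unlabeled = []
        for x in unlabeled:
            label, conf = predict_proba(model, x)
            if conf >= threshold:
                labeled.append((x, label))      # accept the pseudo label
            else:
                still_unlabeled.append(x)       # keep for a later round
        if len(still_unlabeled) == len(unlabeled):
            break                               # no progress; stop early
        unlabeled = still_unlabeled
    return labeled, unlabeled
```

Entropy minimization (EM) replaces the hard pseudo-label step with a loss term that penalizes high-entropy predictions on the unlabeled pool, but the overall train-on-unlabeled-data pattern is the same.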
+ +# 3 Model + +Before getting into the details of the proposed model, we introduce the notations for semi-supervised MNER. The labeled and unlabeled datasets are denoted as $D_{l}$ and $D_{u}$ respectively. The unlabeled dataset $D_{u}$ with $|D_u|$ samples is formulated as $\{(\mathbf{S}_i^u,\mathbf{V}_i^u)\}_{i = 1}^{|D_u|}$ . The labeled dataset $D_{l}$ with $|D_{l}|$ samples is defined as $\{(\mathbf{S}_i^l,\mathbf{V}_i^l,\mathbf{y}_i)\}_{i = 1}^{|D_l|}$ where $\mathbf{S}_i^l$ and $\mathbf{V}_i^l$ are the text and image of the $i$ -th sample, and $\mathbf{y}_i$ is the task-defined label for MNER. + +Following conventional MNER studies (Moon et al., 2018), the input text is denoted as $\mathbf{S} = \{w_1, w_2, \ldots, w_{N_s}\}$ and the corresponding label sequence is $\mathbf{y} = \{y_1, y_2, \ldots, y_{N_s}\}$ . For instance, given a sentence $\mathbf{S} = \{\text{Anyway, the, best, Benz, in, the, world}\}$ , the label sequence is annotated as $\mathbf{y} = \{\mathrm{O}, \mathrm{O}, \mathrm{O}, \mathrm{B-PER}, \mathrm{O}, \mathrm{O}, \mathrm{O}\}$ with the BIO2 tagging schema (Tjong Kim Sang and Veenstra, 1999). Unlike the existing MNER models that combine whole-sentence features with image features directly, we focus on the fine-grained correlation between the phrases of a sentence and the image. Therefore, the span-level representations of each phrase in the sentence are utilized to predict the labels, and the label for the input text $\mathbf{S}$ is reformulated as a named entity set $\mathbf{y} = \{y_k\}_{k=1}^{N_e}$ where $y_k$ is a tuple $(l_k, r_k, \bar{y})$ and $N_e$ is the number of named entities. $(l_k, r_k)$ is the span of an + +entity that corresponds to the phrase $\mathbf{S}_{(l_k,r_k)} = \{w_{l_k},w_{l_k + 1},\ldots ,w_{r_k}\}$ and $\bar{y}$ is the named entity type.
For instance, the label for the sentence $\mathbf{S} =$ {Anyway, the, best, Benz, in, the, world} is formulated as $\mathbf{y} = \{(4,4,\mathrm{PER})\}$ . + +The SMVAE model is shown in Figure 2. For the multimodal data, we use BERT (Devlin et al., 2019) as the text encoder to obtain the representations of sentences and ResNet (He et al., 2016) as the visual encoder to obtain the regional representations of images. The proposed SMVAE consists of two modal-specific VAEs that acquire the latent representations of the two modalities. We obtain the multimodal representations used to predict the labels by applying product-of-experts (PoE) (Hinton, 2002) to the latent representations of the two modalities. The latent representations and the labels are combined to reconstruct the input features in the modal-specific VAEs, implicitly modeling the correlation between span labels and multimodal features. Therefore, the unlabeled data can be exploited to improve the performance of MNER. + +# 3.1 Multimodal Feature Extraction + +Given the multimodal data as input, we need to preprocess them and map them into dense representations for deep neural networks, as shown in Figure 2. We denote the input text with $N_{s}$ words as $\mathbf{S} = \{w_1,w_2,\dots ,w_{N_s}\}$ . Given the impressive performance of pre-trained language models, we utilize BERT (Devlin et al., 2019) to map the discrete words of a sentence into dense distributed representations. Before feeding the text into BERT, we insert the special tokens [CLS] and [SEP] at the start and end of the text, and the extended text is formulated as $\mathbf{S}' = \{w_0,w_1,\ldots ,w_{N_s + 1}\}$ where $w_{0}$ and $w_{N_s + 1}$ represent the special tokens respectively. The text feature extraction process can be simplified as $\mathbf{B} = \mathrm{BERT}(\mathbf{S}') = \{b_i\}_{i = 0}^{N_s + 1}$ . To further capture contextual information, we use BiLSTM networks to extract hidden representations of the text.
The extraction process can be defined as $\mathbf{H}^g = \mathrm{BiLSTM}(\mathbf{B};\theta_g) = \{\mathbf{h}_i^g\}_{i = 0}^{N_s + 1}$ and $\mathbf{H}^e = \mathrm{BiLSTM}(\mathbf{B};\theta_e) = \{\mathbf{h}_i^e\}_{i = 0}^{N_s + 1}$ where $\theta_{g}$ and $\theta_{e}$ are trainable weights in the BiLSTM networks. As mentioned above, we focus on the span features and exploit them to predict the entities in the text. The spans of the text can be formulated as $\{\mathbf{S}_{(i,j)}|1\leq i\leq j\leq N_s\}$ where $\mathbf{S}_{(i,j)} = \{w_i,w_{i + 1},\dots ,w_j\}$ . The global representations of spans are denoted as $\{\mathbf{c}_{(i,j)}^g |1\leq i\leq j\leq N_s\}$ where $\mathbf{c}_{(i,j)}^g = \frac{1}{j - i + 1}\sum_{k = i}^j\mathbf{h}_k^g$ . The edge representations of spans are calculated as $\{\mathbf{c}_{(i,j)}^e |1\leq i\leq j\leq N_s\}$ where $\mathbf{c}_{(i,j)}^e = [\mathbf{h}_i^e;\mathbf{h}_j^e;\mathbf{h}_i^e -\mathbf{h}_j^e;\mathbf{h}_i^e\odot \mathbf{h}_j^e]$ and $\odot$ is the element-wise product. + +![](images/862b58eef2dab457d41232c064c397ecb72b51ed57de3e2aae0e5d59cbfeecd3.jpg) +Figure 2: The overall architecture of the span-based multimodal variational autoencoder for semi-supervised MNER. + +For the visual modality, we utilize ResNet (He et al., 2016) to extract the regional representations of images. Before feeding an image into ResNet, we resize it to $224 \times 224$ pixels. The regional representations of the image $\mathbf{V} = \{v_{1}, v_{2}, \ldots, v_{49}\}$ are extracted from the last convolutional layer of ResNet. We apply an average pooling layer to the regional representations, and the global feature of the image is calculated as $\mathbf{V}^{g} = \frac{1}{49} \sum_{i=1}^{49} v_{i}$ . + +# 3.2 Multimodal Variational Autoencoder + +To model the latent representations of the text and image modalities, the proposed SMVAE model consists of two modal-specific VAE networks named text-VAE and image-VAE.
The encoders of the VAEs contain dense layers that map the input features to a mean vector $\mu$ and a standard deviation vector $\sigma$. For the text modality, the global span representations $\mathbf{c}^g$ are fed into the text-VAE to parameterize the mean vector $\mu_s$ and standard deviation vector $\sigma_s$. The true posterior $p(\mathbf{z}^s|\mathbf{c}^g)$ is approximated with these parameters, and the distribution of $\mathbf{z}^s$ is formulated as $\mathbf{z}^s \sim q(\mathbf{z}^s|\mathbf{c}^g) = \mathcal{N}(\mu_s, \sigma_s^2)$, where $\mu_s = \mathrm{FFNN}(\mathbf{c}^g; \theta_\mu^s)$ and $\sigma_s = \mathrm{FFNN}(\mathbf{c}^g; \theta_\sigma^s)$; here FFNN is short for feed-forward neural network, and $\theta_\mu^s$ and $\theta_\sigma^s$ are trainable parameters of the text-VAE encoder. For the visual modality, the global image features $\mathbf{V}^g$ are fed into the encoder of the image-VAE, and the mean vector $\mu_v$ and standard deviation vector $\sigma_v$ of the image latent representations are calculated as $\mu_v = \mathrm{FFNN}(\mathbf{V}^g; \theta_\mu^v)$ and $\sigma_v = \mathrm{FFNN}(\mathbf{V}^g; \theta_\sigma^v)$, where $\theta_\mu^v$ and $\theta_\sigma^v$ are trainable weights of the image-VAE encoder. These parameters approximate the true posterior $p(\mathbf{z}^v | \mathbf{V}^g)$, and the distribution of $\mathbf{z}^v$ is formulated as $\mathbf{z}^v \sim q(\mathbf{z}^v | \mathbf{V}^g) = \mathcal{N}(\mu_v, \sigma_v^2)$.

To bridge the semantic gap between the text and image representations, we need to compute multimodal features for prediction. Previous studies treated the text and image features as equals and mapped the concatenated features of the two modalities into the same latent representation (Khattar et al., 2019).
However, the text and image of a post may be semantically mismatched, which introduces noise into the prediction. We therefore exploit the modal-specific VAEs to map the features of the two modalities into separate latent representations with independent distributions. Under the assumption that the two modalities are conditionally independent given the multimodal latent representation, the latent distribution $p(\mathbf{z}^m | \mathbf{c}^g, \mathbf{V}^g)$ of the multimodal representations can be factorized into the two individual latent distributions $p(\mathbf{z}^m | \mathbf{c}^g)$ and $p(\mathbf{z}^m | \mathbf{V}^g)$. We therefore apply a product-of-experts (PoE) (Hinton, 2002) to estimate the multimodal latent distribution: $p(\mathbf{z}^m | \mathbf{c}^g, \mathbf{V}^g) \propto p(\mathbf{z}^m | \mathbf{c}^g)\, p(\mathbf{z}^m | \mathbf{V}^g) = q(\mathbf{z}^s | \mathbf{c}^g)\, q(\mathbf{z}^v | \mathbf{V}^g)$. We assume the latent representations are independent Gaussians, so the distribution of $\mathbf{z}^m$ is formulated as $\mathbf{z}^m\sim \mathcal{N}(\mu_m,\sigma_m^2)$ with $\sigma_{m}^{2} = (\sigma_{s}^{-2} + \sigma_{v}^{-2})^{-1}$ and the precision-weighted mean $\mu_{m} = \frac{\mu_{s}\sigma_{v}^{2} + \mu_{v}\sigma_{s}^{2}}{\sigma_{s}^{2} + \sigma_{v}^{2}}$.

To train the model end to end, we use the reparameterization trick (Kingma and Welling, 2014) to sample the latent representations. The latent variable $\mathbf{z}^m$ for the multimodal representations is calculated as $\mathbf{z}^m = \mu_m + \sigma_m\odot \epsilon$ where $\epsilon \sim \mathcal{N}(0,\mathbf{I})$. We then use the multimodal features to predict the label probabilities, $\hat{y} = \mathrm{FFNN}([\mathbf{z}^m;\mathbf{c}^e ];\theta_o)$, where $\theta_{o}$ denotes the trainable weights of the prediction FFNN.
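The product-of-experts combination of the two diagonal-Gaussian posteriors and the reparameterized sampling can be sketched as follows. This is a minimal NumPy sketch under the paper's independence assumptions; the function names are illustrative.

```python
import numpy as np

def poe_gaussian(mu_s, sigma_s, mu_v, sigma_v):
    """Product of two diagonal Gaussian 'experts' (Hinton, 2002).

    Precisions add, and the mean is precision-weighted:
      sigma_m^2 = (sigma_s^-2 + sigma_v^-2)^-1
      mu_m      = sigma_m^2 * (mu_s / sigma_s^2 + mu_v / sigma_v^2)
    """
    var_s, var_v = sigma_s ** 2, sigma_v ** 2
    var_m = 1.0 / (1.0 / var_s + 1.0 / var_v)
    mu_m = var_m * (mu_s / var_s + mu_v / var_v)
    return mu_m, np.sqrt(var_m)

def reparameterize(mu, sigma, rng):
    """z = mu + sigma * eps, eps ~ N(0, I) (Kingma and Welling, 2014)."""
    return mu + sigma * rng.standard_normal(mu.shape)
```

Two equally confident experts with means 0 and 2 combine to a sharper Gaussian centered at 1, which is the behavior the paper relies on when fusing the text and image posteriors.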
Given the annotated entity set $\mathbf{y}$, the set of all negative instance candidates is defined as $\tilde{\mathbf{y}} = \{(l,r,\mathrm{O}) \mid (l,r,\bar{y})\notin \mathbf{y},\, 1\leq l\leq r\leq N_s,\, \bar{y}\in \mathcal{V}\}$, where $\mathcal{V}$ is the label space and $\mathrm{O}$ is the label for non-entity spans. To ensure a balanced class distribution within each batch, we randomly select a subset $\tilde{\mathbf{y}}^{\prime}$ from the candidate set $\tilde{\mathbf{y}}$ with the same size as $\mathbf{y}$. The span-level cross-entropy loss for training the model is defined as

$$
\mathcal {L} _ {1} = \sum_ {(i, j, \bar {y}) \in \tilde {\mathbf {y}} ^ {\prime} \cup \mathbf {y}} - \bar {y} \log \hat {y} _ {(i, j)} \tag {1}
$$

where $\hat{y}_{(i,j)}$ is the predicted probability for the span $\mathbf{S}_{(i,j)}$.

The decoders of SMVAE are trained to reconstruct the representations of the samples. For the text modality, the span types are correlated with the span representations. We therefore combine the true labels (for labeled data) or the predicted probabilities (for unlabeled data) with the text latent representations and feed them into the decoder of the text-VAE. For labeled data, the reconstructed span representation is calculated as $\hat{\mathbf{c}}^g = \mathrm{FFNN}([\mathbf{z}^s;\bar{y} ];\theta_d^s)$, where $\mathbf{z}^{s} = \mu_{s} + \sigma_{s}\odot \epsilon$.
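The balanced negative sampling used for Equation 1 can be sketched as follows. This is an illustrative sketch: the subset size and the "O" label follow the text above, while the uniform sampler and the function name are assumptions.

```python
import random

def sample_balanced_negatives(gold, n_words, seed=0):
    """Draw |y| non-entity spans, labeled 'O', from all spans of the
    sentence (1-based boundaries l <= r) that are not annotated entities,
    so positives and negatives are balanced within a batch."""
    rng = random.Random(seed)
    gold_spans = {(l, r) for (l, r, _) in gold}
    candidates = [(l, r, "O")
                  for l in range(1, n_words + 1)
                  for r in range(l, n_words + 1)
                  if (l, r) not in gold_spans]
    return rng.sample(candidates, min(len(gold), len(candidates)))
```

For the earlier example sentence with $\mathbf{y} = \{(4, 4, \mathrm{PER})\}$ and $N_s = 7$, this draws exactly one non-entity span from the remaining 27 candidates.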
The latent representations of images are fed directly into the decoder of the image-VAE, and the reconstructed representation is calculated as $\hat{\mathbf{V}}^g = \mathrm{FFNN}(\mathbf{z}^v;\theta_d^v)$ where $\mathbf{z}^v = \mu_v + \sigma_v\odot \epsilon$. Following the evidence lower bound (ELBO) of the VAE (Kingma and Welling, 2014), the training loss for SMVAE on labeled data is formulated as follows:

$$
\begin{aligned}
\mathcal{L}_2 = \sum_{(i,j,\bar{y}) \in \tilde{\mathbf{y}}' \cup \mathbf{y}} \Big[ & \|\mathbf{c}_{(i,j)}^{g} - \hat{\mathbf{c}}_{(i,j)}^{g}\|^{2} + \|\mathbf{V}^{g} - \hat{\mathbf{V}}^{g}\|^{2} \\
& + \mathrm{KL}\big(q(\mathbf{z}^{s}\mid \mathbf{c}_{(i,j)}^{g}) \,\big\|\, p(\mathbf{z}^{s})\big) + \mathrm{KL}\big(q(\mathbf{z}^{v}\mid \mathbf{V}^{g}) \,\big\|\, p(\mathbf{z}^{v})\big) \Big]
\end{aligned} \tag{2}
$$

where $\hat{\mathbf{c}}_{(i,j)}^g$ is the reconstructed representation of the span $\mathbf{S}_{(i,j)}$. For the unlabeled data, the reconstructed representation of a span is calculated as
*(Left three columns: Twitter-2015; right three columns: Twitter-2017.)*

| Item | Train | Dev | Test | Train | Dev | Test |
| --- | --- | --- | --- | --- | --- | --- |
| # Tweets | 4,000 | 1,000 | 3,257 | 3,373 | 723 | 723 |
| # PER entities | 2,217 | 552 | 1,816 | 2,943 | 626 | 621 |
| # LOC entities | 2,091 | 522 | 1,697 | 731 | 173 | 178 |
| # ORG entities | 928 | 247 | 839 | 1,674 | 375 | 395 |
| # MISC entities | 940 | 225 | 726 | 701 | 150 | 157 |
Table 1: Statistics of the two MNER benchmark datasets.

$\hat{\mathbf{c}}^{g^{\prime}} = \mathrm{FFNN}([{\bf z}^{s};\hat{y} ];\theta_{d}^{s})$. Considering that a sample contains many more non-entity spans than entity spans, we only learn latent representations for the predicted entity spans. The training loss on unlabeled data is defined as follows:

$$
\begin{aligned}
\mathcal{L}_3 = \sum_{\substack{1\leq i\leq j\leq N_{s}\\ \hat{y}_{(i,j)}\neq \mathrm{O}}} \Big[ & \|\mathbf{c}_{(i,j)}^{g} - \hat{\mathbf{c}}_{(i,j)}^{g^{\prime}}\|^{2} + \|\mathbf{V}^{g} - \hat{\mathbf{V}}^{g}\|^{2} \\
& + \mathrm{KL}\big(q(\mathbf{z}^{s}\mid \mathbf{c}_{(i,j)}^{g}) \,\big\|\, p(\mathbf{z}^{s})\big) + \mathrm{KL}\big(q(\mathbf{z}^{v}\mid \mathbf{V}^{g}) \,\big\|\, p(\mathbf{z}^{v})\big) \Big]
\end{aligned} \tag{3}
$$

# 3.3 Training Procedure

After acquiring the pre-processed labeled and unlabeled multimodal data, we feed them into the model to learn the latent representations of the different modalities and extract the named entities. To train the model on the different objectives jointly, we introduce a hyper-parameter that combines Equation 1, Equation 2, and Equation 3. The overall loss function of the proposed model is defined as follows:

$$
\mathcal {L} = \lambda \cdot \mathcal {L} _ {1} + \mathcal {L} _ {2} + \mathcal {L} _ {3} \tag {4}
$$

where $\lambda$ is the hyper-parameter that balances the different losses. We feed the multimodal data into the model, compute the loss according to Equation 4, and update the trainable weights with stochastic gradient descent (SGD).

# 4 Experiments

# 4.1 Datasets and Experiment Settings

We compare the proposed model with existing methods on two widely used MNER datasets: Twitter-2015 (Lu et al., 2018) and Twitter-2017 (Zhang et al., 2018).
Each sample in the datasets is collected from Twitter and contains a text-image pair. Four types of named entities are annotated in the text: Person (PER), Location (LOC), Organization (ORG), and miscellaneous (MISC). The detailed statistical information
*(Single-type columns report F1 per entity type; P/R/F1 are overall scores. Left block: Twitter-2015; right block: Twitter-2017.)*

| Methods | PER | LOC | ORG | MISC | P | R | F1 | PER | LOC | ORG | MISC | P | R | F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Text** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| ST | 73.30 | 46.90 | 16.62 | 0.77 | 56.85 | 46.05 | 50.88 | 83.53 | 48.35 | 53.11 | 17.84 | 63.81 | 62.39 | 63.09 |
| EM | 76.29 | 50.12 | 8.52 | 0.78 | 61.30 | 46.27 | 52.73 | 81.69 | 51.95 | 48.80 | 1.47 | 69.45 | 59.32 | 63.99 |
| SeqVAT | 74.17 | 58.21 | 17.58 | 8.04 | 60.92 | 49.32 | 54.51 | 84.82 | 60.19 | 53.87 | 11.11 | 65.26 | 66.45 | 65.85 |
| **Multimodal** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| UMT+ST | 76.30 | 58.41 | 23.63 | 7.52 | 55.49 | 53.86 | 54.66 | 81.03 | 60.16 | 56.58 | 13.95 | 67.07 | 60.92 | 63.85 |
| UMT+EM | 72.91 | 65.85 | 28.51 | 13.92 | 52.59 | 58.03 | 55.17 | 79.94 | 58.74 | 54.02 | 18.00 | 62.84 | 62.84 | 62.84 |
| UMT+SeqVAT | 70.36 | 63.94 | 28.01 | 12.89 | 52.17 | 60.42 | 56.00 | 76.82 | 61.11 | 55.48 | 19.75 | 61.03 | 63.88 | 62.42 |
| MAF+ST | 77.18 | 52.44 | 12.77 | 0.52 | 57.14 | 51.18 | 54.12 | 82.31 | 51.49 | 52.35 | 9.76 | 70.81 | 58.18 | 63.88 |
| MAF+EM | 76.20 | 54.74 | 26.09 | 6.39 | 50.47 | 57.30 | 53.67 | 77.64 | 60.90 | 52.45 | 14.88 | 56.32 | 64.62 | 60.19 |
| MAF+SeqVAT | 74.00 | 63.54 | 29.96 | 10.34 | 52.67 | 58.41 | 55.39 | 81.66 | 61.81 | 57.03 | 19.61 | 64.15 | 65.43 | 64.79 |
| Ours | 78.33 | 65.44 | 38.04 | 7.90 | 68.92 | 55.76 | 61.65* | 87.40 | 58.33 | 69.76 | 32.86 | 79.27 | 69.36 | 73.98* |
Table 2: Performance comparison on the two MNER datasets under the semi-supervised setting. Numbers with * indicate that the improvement of our model over all baselines is statistically significant at $p \leqslant 0.05$ under a t-test.

of the two datasets is shown in Table 1. To compare our model with the baselines under the semi-supervised setting, we split the original training set of each dataset into two parts: a labeled dataset $D_{l}$ and an unlabeled one $D_{u}$. To simulate working with a small amount of labeled data, we randomly select 100 samples from the original training set as $D_{l}$ and keep the remaining samples as $D_{u}$. We run the semi-supervised experiments with five random seeds, each with a different split of labeled and unlabeled data, and report the mean performance on the test data.

In the proposed model, we utilize the BERT-base$^2$ version of the pre-trained language model BERT (Devlin et al., 2019) to extract text features, and ResNet152 (He et al., 2016) to extract image features. The size of the hidden layers is set to 768, and the dimension of the latent variables of the modal-specific VAEs is set to 100. We set the learning rate to 1e-5 and the batch size to 8. The hyper-parameter $\lambda$ in Equation 4 is set to $e^{(1 - \frac{|D_l|}{|D_l| + |D_u|})}$. During training, we first train the model on the labeled and unlabeled sets for at most 100 epochs and test it on the development set. Following an early-stopping strategy, we stop training when the F1 score on the development set does not increase within 10 epochs, and evaluate the best model on the test set. All experiments are accelerated on NVIDIA RTX 2080 Ti devices.

# 4.2 Compared Methods

Since there are no previous studies on semi-supervised MNER, we compare the proposed model with widely used semi-supervised NER methods.
Self-training (ST) and entropy minimization (EM) have been demonstrated to be effective for semi-supervised NER (Chen et al., 2020). In addition, the state-of-the-art method SeqVAT combines virtual adversarial training (VAT) (Miyato et al., 2019) with a conditional random field (CRF) (Lafferty et al., 2001) for semi-supervised sequence labeling (Chen et al., 2020). We therefore use BERT stacked with BiLSTM and CRF layers as the base model and apply the ST, EM, and SeqVAT methods on top of it.

The above baselines use only the text modality. We additionally combine effective MNER models with the above semi-supervised learning methods as semi-supervised MNER baselines. The unified multimodal transformer (UMT) (Yu et al., 2020) was proposed to enhance the interaction between the text and image modalities for the MNER task and achieved impressive performance. Xu et al. (2022) proposed a general matching and alignment framework for MNER (MAF) to fuse the text and image representations consistently and obtained the best performance. The semi-supervised MNER baselines are therefore the combinations of these MNER models with the semi-supervised NER methods: UMT+ST, UMT+EM, UMT+SeqVAT, MAF+ST, MAF+EM, and MAF+SeqVAT.

# 4.3 Experimental Results

We compare SMVAE with the baseline methods on the two benchmark datasets under the semi-supervised setting, and report the F1 score for each single entity type as well as the overall precision (P), recall (R), and F1 score. The detailed experi
*(Single-type columns report F1 per entity type; P/R/F1 are overall scores. Left block: Twitter-2015; right block: Twitter-2017.)*

| Methods | PER | LOC | ORG | MISC | P | R | F1 | PER | LOC | ORG | MISC | P | R | F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Text** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| BERT | 84.72 | 79.91 | 58.26 | 38.81 | 68.30 | 74.61 | 71.32 | 90.88 | 84.00 | 79.25 | 61.63 | 82.19 | 83.72 | 82.95 |
| BERT-CRF | 84.74 | 80.51 | 60.27 | 37.29 | 69.22 | 74.59 | 71.81 | 90.25 | 83.05 | 81.13 | 62.21 | 83.32 | 83.57 | 83.44 |
| BERT-BiLSTM-CRF | 84.32 | 79.31 | 61.66 | 37.53 | 71.03 | 73.57 | 72.27 | 90.29 | 84.55 | 80.97 | 64.85 | 83.20 | 84.68 | 83.93 |
| **Multimodal** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| GVATT-BERT-CRF | 84.43 | 80.87 | 59.02 | 38.14 | 69.15 | 74.46 | 71.70 | 90.94 | 83.52 | 81.91 | 62.75 | 83.64 | 84.38 | 84.01 |
| AdaCAN-BERT-CRF | 85.28 | 80.64 | 59.39 | 38.88 | 69.87 | 74.59 | 72.15 | 90.20 | 82.97 | 82.67 | 64.83 | 85.13 | 83.20 | 84.10 |
| MT-BERT-CRF | 85.30 | 81.21 | 61.10 | 37.97 | 70.48 | 74.80 | 72.58 | 91.47 | 82.05 | 81.84 | 65.80 | 84.60 | 84.16 | 84.42 |
| UMT-BERT-CRF | 85.24 | 81.58 | 63.03 | 39.45 | 71.67 | 75.23 | 73.41 | 91.56 | 84.73 | 82.24 | 70.10 | 85.28 | 85.34 | 85.31 |
| UMGF | 84.26 | 83.17 | 62.45 | 42.42 | 74.49 | 75.21 | 74.85 | 91.92 | 85.22 | 83.13 | 69.83 | 86.54 | 84.50 | 85.51 |
| UAMNer | 85.14 | 81.66 | 62.46 | 40.95 | 73.02 | 74.75 | 73.87 | 91.86 | 85.71 | 84.25 | 68.73 | 86.17 | 86.23 | 86.20 |
| MAF | 84.67 | 81.18 | 63.35 | 41.82 | 71.86 | 75.10 | 73.42 | 91.51 | 85.80 | 85.10 | 68.79 | 86.13 | 86.38 | 86.25 |
| Ours | 85.82 | 81.56 | 63.20 | 43.67 | 74.40 | 75.76 | 75.07 | 91.96 | 81.89 | 84.13 | 74.07 | 85.77 | 86.97 | 86.37 |
Table 3: Performance comparison on the two MNER datasets under the supervised setting. The MNER models are trained on the full training sets of Twitter-2015 and Twitter-2017.

mental results on Twitter-2015 and Twitter-2017 are shown in Table 2. Our model achieves the best results on most metrics, and its overall F1 scores increase by 5.6% and 9.2% over the baselines on the two datasets, respectively. Among the text-only methods, the traditional semi-supervised NER method SeqVAT achieves the best results, indicating that combining a CRF with VAT for sequence modeling effectively improves model performance. Accordingly, the semi-supervised MNER methods UMT+SeqVAT and MAF+SeqVAT also obtain the best overall F1 scores among the multimodal baselines. Moreover, the semi-supervised MNER methods consistently outperform the text-only NER methods on Twitter-2015 but not on Twitter-2017. This suggests that the MNER baselines are not adapted to the low-resource setting and cannot always make effective use of the multimodal features under it. Our model outperforms the semi-supervised MNER baselines because we use span features fused with image features, and we exploit modal-specific VAEs to jointly model the multimodal latent representations and span labels, taking advantage of the unlabeled data. Although SeqVAT improves the robustness of sequence models, SMVAE learns multimodal latent representations and the implicit correlation between them and the labels, which benefits semi-supervised MNER.

# 4.4 Further Discussion

To analyze the model in more depth, we examine it from several aspects. We discuss the effect of the proportion of labeled data in the original training set and the effect of the latent variable dimension.
To demonstrate the effectiveness of SMVAE, we compare it with strong MNER models under the supervised setting and conduct an ablation study to verify the usefulness of the multimodal VAE.

![](images/d9fb2baf79db8d235a02719796d6918509cf019946d042b3346439302b0e68dc.jpg)
Figure 3: The performance of SMVAE under different settings vs. the proportion of labeled data $D_{l}$ in the original training set $D_{l} \cup D_{u}$.

![](images/b4e47006d3f1d57eeddbf5ae4b774f1b274ef7f7ec6d88d7174fd252fd7f219a.jpg)

Effect of Labeled Dataset Size. We explore the performance of SMVAE under different settings as the proportion of labeled data in the original training data varies. In Figure 3, "supervised" indicates that SMVAE is trained only on the labeled data, and "semi-supervised" means that SMVAE is trained on both the labeled and unlabeled data. At the same proportion of labeled data, semi-supervised SMVAE outperforms the supervised variant, which demonstrates that SMVAE effectively takes advantage of unlabeled data. As the amount of labeled data increases, the proposed model achieves better results on both datasets.

Supervised Setting. To verify the effectiveness of the proposed model with adequate labeled data, we compare SMVAE with state-of-the-art MNER models under the supervised setting. The training sets of the two datasets are used to train the model and eval
*(Left P/R/F1: Twitter-2015; right P/R/F1: Twitter-2017.)*

| Settings | Methods | P | R | F1 | P | R | F1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Supervised | Ours | 74.40 | 75.76 | 75.07 | 85.46 | 87.42 | 86.43 |
|  | w/o MVAE | 73.17 | 75.51 | 74.32 | 85.16 | 87.05 | 86.09 |
| Semi-supervised | Ours | 68.92 | 55.76 | 61.65 | 79.27 | 69.36 | 73.98 |
|  | w/o MVAE | 67.48 | 54.56 | 60.51 | 77.28 | 67.21 | 71.89 |
Table 4: Ablation study of SMVAE under different settings. "w/o MVAE" indicates that we switch off the multimodal VAE (MVAE), i.e., both the text-VAE and the image-VAE, and train the model for MNER.

uate it on the test set. The conventional MNER models GVATT-BERT-CRF (Lu et al., 2018), AdaCAN-BERT-CRF (Zhang et al., 2018), and UMT-BERT-CRF (Yu et al., 2020) design interaction modules to fuse the text and image modalities. In addition, Zhang et al. (2021) proposed the UMGF model, which combines fine-grained image information with the text through a constructed graph and achieved impressive performance. More recently, MAF (Xu et al., 2022) and UAMNer (Liu et al., 2022) were proposed to align the text and image modalities and fuse them consistently. As shown in Table 3, SMVAE outperforms the baselines on most metrics, and its overall F1 scores increase by 0.22% and 0.12% over the best baselines on the two datasets, respectively. We find that all MNER models outperform the text-based NER models, indicating that the image information in social media posts helps extract named entities from the text. Compared with the above discriminative models, our model learns modal-specific latent representations and fuses the text and image modalities by applying PoE to estimate the multimodal latent features for MNER.

Ablation Study. To investigate the effectiveness of the multimodal VAE (MVAE) module under different settings, we compare the full model with its ablated variant. The overall results on the two datasets are shown in Table 4. The model without MVAE performs worse than the full SMVAE under both settings, which verifies the effectiveness of MVAE for tackling MNER. Furthermore, the performance degradation of the ablated model under the supervised setting is smaller than under the semi-supervised setting.
This is because adequate labeled data is available to train the model under the supervised setting, whereas MVAE plays a particularly important role under the semi-supervised setting. In the low-resource setting, SMVAE exploits the MVAE module to jointly model the implicit correlation between the multimodal representations and the labels, making effective use of the unlabeled data.

![](images/e29296f849889576cfe47b3e399fe40ab9a7eeb931d99ce04a7f53a9fcfc7445.jpg)
Figure 4: The performance of SMVAE under the supervised setting vs. the dimension of the latent variable.

![](images/b732478299b0a9e48b2921aa83fe56b455548f6c2b1abeba270b2c311fd8ab68.jpg)

Effect of Latent Variable Dimension. The dimension of the latent variables in MVAE is a key hyper-parameter affecting the performance of SMVAE, and we examine its effect under the supervised setting. We vary the dimension from 64 to 1024, doubling it at each step. As shown in Figure 4, the performance of the model changes with the dimension of the latent variable: the higher the dimension, the more the performance degrades. The multimodal latent variable represents the fusion of the text and image modalities, and a higher dimension means that more image information is introduced into the model. When the semantic relation between the text and image of a social media post is mismatched, a higher-dimensional latent variable introduces more noise into the model and hurts MNER performance.

# 5 Conclusion

In this manuscript, we propose the semi-supervised multimodal named entity recognition (MNER) task and identify its critical challenge compared with traditional semi-supervised named entity recognition (NER). Furthermore, we analyze why the existing semi-supervised NER methods are insufficient for multimodal data. We therefore propose the span-based multimodal variational autoencoder to tackle semi-supervised MNER.
The proposed model exploits a multimodal VAE, comprising two modal-specific VAEs, to learn multimodal latent representations and to jointly model the implicit correlation between labels and multimodal features, making effective use of unlabeled multimodal data. The experimental results verify that our approach not only outperforms supervised learning baselines, but also achieves better results than semi-supervised learning methods.

# 6 Limitations

The inference time of the proposed model grows with the length of the input sentence, because it must predict the type of every candidate span at inference time, and the number of spans grows quadratically with the sentence length. Moreover, our model does not scale to more than one image per post, while a posted Twitter message may contain several images. Future MNER models should therefore be able to process text accompanied by multiple images.

# Acknowledgements

We thank the anonymous reviewers for their valuable comments on our manuscript. This research is supported by the Chinese Scientific and Technical Innovation Project 2030 (2018AAA0102100) and the National Natural Science Foundation of China (No. 62272250, U1936206, U1903128, 62002178).

# References

Dawei Chen, Zhixu Li, Binbin Gu, and Zhigang Chen. 2021. Multimodal named entity recognition with image attributes and image knowledge. In Database Systems for Advanced Applications - 26th International Conference, volume 12682 of Lecture Notes in Computer Science, pages 186-201. Springer.

Luoxin Chen, Weitong Ruan, Xinyue Liu, and Jianhua Lu. 2020. SeqVAT: Virtual adversarial training for semi-supervised sequence labeling. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8801-8811, Online. Association for Computational Linguistics.

Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018.
Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1914-1925, Brussels, Belgium. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Yves Grandvalet and Yoshua Bengio. 2004. Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems 17, pages 529-536.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778. IEEE Computer Society.

Geoffrey E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural Comput., 14(8):1771-1800.

Dhruv Khattar, Jaipal Singh Goud, Manish Gupta, and Vasudeva Varma. 2019. MVAE: Multimodal variational autoencoder for fake news detection. In The World Wide Web Conference, pages 2915-2921. ACM.

Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations.

John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282-289.

Luping Liu, Meiling Wang, Mozhi Zhang, Linbo Qing, and Xiaohai He. 2022. UAMNer: Uncertainty-aware multimodal named entity recognition in social media posts. Appl. Intell., 52(4):4109-4125.
Di Lu, Leonardo Neves, Vitor Carvalho, Ning Zhang, and Heng Ji. 2018. Visual attention model for name tagging in multimodal social media. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1990-1999, Melbourne, Australia. Association for Computational Linguistics.

Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2019. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Trans. Pattern Anal. Mach. Intell., 41(8):1979-1993.

Seungwhan Moon, Leonardo Neves, and Vitor Carvalho. 2018. Multimodal named entity recognition for short social media posts. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 852-860, New Orleans, Louisiana. Association for Computational Linguistics.

Erik F. Tjong Kim Sang and Jorn Veenstra. 1999. Representing text chunks. In Ninth Conference of the European Chapter of the Association for Computational Linguistics, pages 173-179, Bergen, Norway. Association for Computational Linguistics.

Bo Xu, Shizhou Huang, Chaofeng Sha, and Hongya Wang. 2022. MAF: A general matching and alignment framework for multimodal named entity recognition. In WSDM '22: The Fifteenth ACM International Conference on Web Search and Data Mining, Virtual Event / Tempe, AZ, USA, February 21-25, 2022, pages 1215-1223. ACM.

David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 189-196, Cambridge, Massachusetts, USA. Association for Computational Linguistics.

Jianfei Yu, Jing Jiang, Li Yang, and Rui Xia. 2020. Improving multimodal named entity recognition via entity span detection with unified multimodal transformer.
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3342-3352, Online. Association for Computational Linguistics.

Dong Zhang, Suzhong Wei, Shoushan Li, Hanqian Wu, Qiaoming Zhu, and Guodong Zhou. 2021. Multimodal graph fusion for named entity recognition with targeted visual guidance. In Thirty-Fifth AAAI Conference on Artificial Intelligence, pages 14347-14355. AAAI Press.

Qi Zhang, Jinlan Fu, Xiaoyu Liu, and Xuanjing Huang. 2018. Adaptive co-attention network for named entity recognition in tweets. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 5674-5681. AAAI Press.
# A Span-level Bidirectional Network for Aspect Sentiment Triplet Extraction

Yuqi Chen, Keming Chen*, Xian Sun and Zequn Zhang

Aerospace Information Research Institute

Key Laboratory of Network Information System Technology (NIST)

School of Electronic, Electrical and Communication Engineering

University of Chinese Academy of Sciences

chenyuqi19@mails.ucas.ac.cn, ckmdejob@hotmail.com, {sunxian,zqzhangl}@mail.ie.ac.cn

# Abstract

Aspect Sentiment Triplet Extraction (ASTE) is a new fine-grained sentiment analysis task that aims to extract triplets of aspect terms, sentiments, and opinion terms from review sentences. Recently, span-level models have achieved gratifying results on the ASTE task by taking advantage of predictions over all possible spans. Since enumerating all possible spans significantly increases the number of potential aspect and opinion candidates, efficiently extracting the triplet elements among them is both crucial and challenging. In this paper, we present a span-level bidirectional network which takes all possible spans as input and extracts triplets from spans bidirectionally. Specifically, we devise both an aspect decoder and an opinion decoder to decode the span representations and extract triplets in the aspect-to-opinion and opinion-to-aspect directions. With these two decoders complementing each other, the whole network can extract triplets from spans more comprehensively.
Moreover, considering that mutual exclusion cannot be guaranteed between the spans, we design a similar span separation loss to facilitate the downstream task of distinguishing the correct span by expanding the KL divergence of similar spans during the training process; in the inference process, we adopt an inference strategy to remove conflicting triplets from the results based on their confidence scores. Experimental results show that our framework not only significantly outperforms state-of-the-art methods, but also achieves better performance in predicting triplets with multi-token entities and in extracting triplets from sentences containing multiple triplets$^1$.

# 1 Introduction

Aspect-based sentiment analysis (ABSA) is an important field in natural language processing (NLP).

![](images/b4baeabd8d9fa63aa9ca8e95f10977ea944cfcd83c1cfcbdf0e70b0f72a8a7cc.jpg)
Figure 1: An example of ABSA subtasks. The spans highlighted in blue are aspect terms. The spans in red are opinion terms. Sentiments are marked with green.

The ABSA task contains various fundamental subtasks, such as aspect term extraction (ATE), opinion term extraction (OTE), and aspect-level sentiment classification (ASC). Recent studies focus on solving these tasks individually or on combining two subtasks, such as aspect term polarity co-extraction (APCE), aspect opinion co-extraction (AOCE), and aspect-opinion pair extraction (AOPE). However, none of these subtasks aims to extract the aspect terms (AT) together with their corresponding opinion terms (OT) and sentiment polarity (SP) simultaneously. To tackle this problem, Peng et al. (2020) propose the aspect sentiment triplet extraction (ASTE) task, which aims to extract (AT, OT, SP) triplets such as (hot dogs, top notch, positive) and (coffee, average, negative) in the example of Figure 1.
To solve the ASTE task, recent works (Peng et al., 2020; Wu et al., 2020; Mao et al., 2021) use sequential token-level methods and formulate this task as a sequence tagging problem. Although these works achieve competitive results, their token-level models suffer from cascading errors due to sequential decoding. Therefore, Xu et al. (2021) propose a span-level model to capture the span-to-span interactions among ATs and OTs by enumerating all possible spans as input. Despite the exciting results their work has yielded, several challenges remain for existing span-level models. First, since both aspect terms and opinion terms can trigger triplets, it is a challenge to identify triplets bidirectionally. Second, unlike token-level methods, span-level input cannot guarantee mutual exclusivity among the spans, so similar spans (spans that share tokens), such as hot dogs, dogs, and the hot dogs, may cause confusion in downstream tasks. Thus, it is challenging for span-level models to effectively distinguish these similar spans. Third, the existence of similar spans allows span-level models to generate conflicting triplets in the results, such as (hot dogs, top notch, positive), (hot dogs, are top notch, positive), and (the hot dogs, top notch, positive). How to properly extract non-conflicting triplets is also challenging.

To address these challenges, we propose a span-level bidirectional network for the ASTE task. Unlike prior span-level work (Xu et al., 2021), our network decodes all possible span representations in both the aspect-to-opinion and opinion-to-aspect directions through the cooperation of an aspect decoder and an opinion decoder. In the aspect-to-opinion direction, the aspect decoder aims to extract ATs such as {hot dogs, coffee}, and the opinion decoder aims to extract OTs such as {top notch} for each specific AT like {hot dogs}.
Analogously, in the opinion-to-aspect direction, the opinion decoder and aspect decoder are utilized to extract OTs and their corresponding ATs, respectively. Furthermore, we design the similar span separation loss to direct the model to deliberately distinguish similar span representations during the training process, and we propose an inference strategy, employed at prediction time, to eliminate conflicting triplets from the extraction results. To verify the effectiveness of our framework, we conduct a series of experiments on four benchmark datasets. The experimental results show our framework substantially outperforms the existing methods. In summary, our contributions are as follows:

- We design a span-level bidirectional network that extracts triplets in both the aspect-to-opinion and opinion-to-aspect directions in a span-level model. By this design, our network can identify triplets more comprehensively.
- We propose the similar span separation loss to separate the representations of spans that contain shared tokens. Based on these differentiated span representations, downstream models can discriminate span representations more precisely.
- We design an inference strategy to eliminate the potential conflicting triplets caused by the lack of mutual exclusivity among spans.

# 2 Related Work

Aspect based sentiment analysis (ABSA) is a fine-grained sentiment analysis task that consists of various subtasks, including aspect term extraction (ATE) (Wang et al., 2016; Li and Lam, 2017; Xu et al., 2018; Li et al., 2018; Ma et al., 2019), opinion term extraction (OTE) (Poria et al., 2016; Fan et al., 2019; Wu et al., 2020), and aspect-level sentiment classification (ASC) (Dong et al., 2014; Tang et al., 2016; He et al., 2018; Li et al., 2019b).
Since these subtasks are solved individually, recent studies attempt to couple two subtasks into a compound task, such as aspect term polarity co-extraction (APCE) (Li and Lu, 2017; He et al., 2019; Li et al., 2019a), aspect and opinion co-extraction (Qiu et al., 2011; Liu et al., 2013; Yu et al., 2019), aspect category and sentiment classification (Hu et al., 2019), and aspect-opinion pair extraction (AOPE) (Chen et al., 2020; Zhao et al., 2020; Gao et al., 2021; Wu et al., 2021). Although many works have achieved great progress on these tasks, none of them aims to identify the aspect terms as well as their corresponding opinion terms and sentiment polarity.

To tackle this issue, Peng et al. (2020) proposed the aspect sentiment triplet extraction (ASTE) task, which aims to extract aspect terms, the sentiments of the aspect terms, and the opinion terms causing the sentiments. Some methods (Xu et al., 2020; Wu et al., 2020) designed a unified tagging scheme to solve this task. Others (Chen et al., 2021; Mao et al., 2021) formulated this task as a multi-turn machine reading comprehension task and solved it with machine reading comprehension frameworks. Recently, Xu et al. (2021) proposed a span-level model to first extract ATs and OTs and then predict the sentiment relation for each (AT, OT) pair.

# 3 Methodology

As shown in Figure 2, our network consists of four parts: span generation, similar span separation loss, bidirectional structure, and the inference strategy.

![](images/b1de03551b89deffe7d5fc7257155cef28b549d9fbdadf59d8d181a54d2351db.jpg)
Figure 2: The overall architecture of our span-level bidirectional network. The blue arrows and modules and the red arrows and modules indicate extraction in the aspect-to-opinion and opinion-to-aspect directions, respectively. The process shown with dotted lines only occurs during inference.
In the following subsections, we first give the definition of the ASTE task and then detail our network structure.

# 3.1 Task Definition

For a sentence $S = \{w_{1},w_{2},\dots ,w_{n}\}$ consisting of $n$ words, the goal of the ASTE task is to extract a set of aspect sentiment triplets $\mathcal{T} = \{(a,o,c)_k\}_{k = 1}^{|\mathcal{T}|}$ from the given sentence $S$, where $(a,o,c)$ refers to (aspect term, opinion term, sentiment polarity) and $c\in \{\mathrm{Positive, Neutral, Negative}\}$.

# 3.2 Span Generation

Given a sentence $S$ with $n$ tokens, there are $m$ possible spans in total. Each span $\mathbf{s}_i = \{w_{\operatorname{start}(i)},\dots ,w_{\operatorname{end}(i)}\}$ is defined by all the tokens from $\operatorname{start}(i)$ to $\operatorname{end}(i)$ inclusive, and the maximum length of span $\mathbf{s}_i$ is $l_{s}$:

$$
1 \leq \operatorname{start}(i) \leq \operatorname{end}(i) \leq n \tag{1}
$$

$$
\operatorname{end}(i) - \operatorname{start}(i) \leq l_{s} \tag{2}
$$

To obtain span representations, we first need token-level representations. In this paper, we utilize BERT (Devlin et al., 2018) as a sentence encoder to obtain token-level contextualized representations $\{\mathbf{h}_1,\mathbf{h}_2,\dots ,\mathbf{h}_n\}$ of the given sentence $S$. The token-level representations are then combined by max pooling. Note that various methods can be applied to generate span representations; their effectiveness is investigated in the ablation study in the Appendix. We define the representation of span $\mathbf{s}_i$ as:

$$
\mathbf{g}_{i} = \operatorname{Max}\left(\mathbf{h}_{\operatorname{start}(i)}, \mathbf{h}_{\operatorname{start}(i)+1}, \dots, \mathbf{h}_{\operatorname{end}(i)}\right) \tag{3}
$$

where $\operatorname{Max}$ represents max pooling.

# 3.3 Similar Span Separation Loss

After generating the span representations, most previous models directly use them for downstream tasks.
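As a concrete illustration, the span enumeration and max pooling of Section 3.2 (Eqs. 1-3) can be sketched as follows. This is a minimal plain-Python sketch, not the authors' implementation; the names (`generate_spans`, `token_reprs`, `max_span_len`) are ours, and `token_reprs` stands in for the BERT token representations $\{\mathbf{h}_1,\dots,\mathbf{h}_n\}$:

```python
def generate_spans(token_reprs, max_span_len=8):
    """Enumerate all spans of up to `max_span_len` tokens (Eqs. 1-2) and
    build each span representation by element-wise max pooling over its
    token representations (Eq. 3)."""
    n = len(token_reprs)
    spans, span_reprs = [], []
    for start in range(n):
        for end in range(start, min(start + max_span_len, n)):
            spans.append((start, end))  # inclusive token indices
            tokens = token_reprs[start:end + 1]
            # g_i = Max(h_start, ..., h_end): element-wise maximum
            span_reprs.append([max(dims) for dims in zip(*tokens)])
    return spans, span_reprs

# Toy usage: 5 "token representations" with 2 dimensions each
h = [[0.1, 0.9], [0.4, 0.2], [0.8, 0.3], [0.5, 0.5], [0.2, 0.7]]
spans, g = generate_spans(h, max_span_len=3)  # 12 spans of length <= 3
```

In a real model `token_reprs` would be a tensor of BERT hidden states and the pooling would be batched, but the enumeration pattern is the same.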
However, enumerating all possible spans in a sentence inevitably generates many spans that share tokens with each other, and the model may struggle to process these similar spans because of their adjacent distribution. To separate spans with similar distributions, we propose a similar span separation loss based on KL divergence, as shown in Figure 2. The similar span separation loss is defined as:

$$
KL(\mathbf{g}_i \,\|\, G_i) = \sum_{j}^{G_i} \operatorname{softmax}(\mathbf{g}_i) \log \frac{\operatorname{softmax}(\mathbf{g}_i)}{\operatorname{softmax}(\mathbf{g}_j)} \tag{4}
$$

$$
KL(G_i \,\|\, \mathbf{g}_i) = \sum_{j}^{G_i} \operatorname{softmax}(\mathbf{g}_j) \log \frac{\operatorname{softmax}(\mathbf{g}_j)}{\operatorname{softmax}(\mathbf{g}_i)} \tag{5}
$$

$$
\mathcal{J}_{KL} = \sum_{i}^{m} \log\left(1 + \frac{2}{KL(G_i \,\|\, \mathbf{g}_i) + KL(\mathbf{g}_i \,\|\, G_i)}\right) \tag{6}
$$

where $G_{i}$ indicates the set of representations of spans which share at least one token with $\mathbf{s}_i$. Note that we do not use the KL divergence directly as the separation loss but combine it with the $\log(1 + 1/x)$ function, so that when the KL divergence is small the separation loss is large, and vice versa.

# 3.4 Bidirectional Structure

As an aspect sentiment triplet can be triggered by either an aspect term or an opinion term, we propose a bidirectional structure to decode the span representations. As shown in Figure 2, the bidirectional structure consists of an aspect decoder and an opinion decoder.
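A runnable sketch of Eqs. 4-6 may make the loss concrete. This is our own plain-Python rendering, not the authors' code; `span_reprs` holds the raw span vectors $\mathbf{g}_i$ and `spans` their inclusive index pairs:

```python
import math

def softmax(x):
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def kl(p, q):
    # KL divergence between two discrete distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def separation_loss(span_reprs, spans, eps=1e-12):
    """Similar span separation loss (Eqs. 4-6): for each span i, G_i is the
    set of spans sharing at least one token with it; a small KL divergence
    (similar representations) yields a large loss, pushing them apart."""
    loss = 0.0
    for i, (si, gi) in enumerate(zip(spans, span_reprs)):
        p_i = softmax(gi)
        fwd = bwd = 0.0
        overlapped = False
        for j, (sj, gj) in enumerate(zip(spans, span_reprs)):
            # overlap test on inclusive (start, end) index pairs
            if j == i or si[0] > sj[1] or sj[0] > si[1]:
                continue
            overlapped = True
            p_j = softmax(gj)
            fwd += kl(p_i, p_j)   # Eq. 4
            bwd += kl(p_j, p_i)   # Eq. 5
        if overlapped:
            loss += math.log(1.0 + 2.0 / (fwd + bwd + eps))  # Eq. 6
    return loss
```

Two overlapping spans with near-identical representations drive the inner KL terms toward zero and the loss toward its maximum, which is exactly the gradient signal that separates them during training.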
The details of each component in the bidirectional structure are given in the following subsections.

# 3.4.1 Aspect-to-opinion Direction

In the aspect-to-opinion direction (blue arrows and modules in Figure 2), the aspect decoder aims to extract all ATs from the sentence. We obtain the confidence score as well as the probability that a span is a valid AT as follows:

$$
u_i^a = \mathrm{FFNN}_a(\mathbf{g}_i, \theta_a) \tag{7}
$$

$$
q_i^{a\rightarrow o,a} = \mathbf{w}_{a\rightarrow o,a}\, u_i^a \tag{8}
$$

$$
\mathbf{p}_i^{a\rightarrow o,a} = \operatorname{softmax}\left(q_i^{a\rightarrow o,a}\right) \tag{9}
$$

where $\mathrm{FFNN}_a$ represents the FFNN of the aspect decoder, $\theta_{a}$ is the parameter of the FFNN, $\mathbf{w}_{a\rightarrow o,a} \in \mathbb{R}^{m\times c^0}$ is a trainable weight vector, and $c^0 \in \{\text{Valid},\text{Invalid}\}$ is the number of validity categories.

Then, given the set $G_{a}$ of original span representations of all valid ATs, $\mathbf{g}_j^a \in G_a$, we apply the opinion decoder to identify all OTs along with their sentiment for each particular valid AT by exploiting an attention mechanism.
Similarly, we obtain the probability distribution of the OT's sentiment along with its confidence score via:

$$
u_i^o = \mathrm{FFNN}_o(\mathbf{g}_i, \theta_o) \tag{10}
$$

$$
\alpha_{i,j}^{a\rightarrow o} = \frac{\exp\left(u_i^o\right)}{\exp\left(\mathbf{g}_j^a\right)} \tag{11}
$$

$$
q_{i,j}^{a\rightarrow o,o} = \mathbf{w}_{a\rightarrow o,o}\left(u_i^o + \alpha_{i,j}^{a\rightarrow o}\cdot \mathbf{g}_j^a\right) \tag{12}
$$

$$
\mathbf{p}_{i,j}^{a\rightarrow o,o} = \operatorname{softmax}\left(q_{i,j}^{a\rightarrow o,o}\right) \tag{13}
$$

where $\mathrm{FFNN}_o$ represents the FFNN of the opinion decoder, $\theta_{o}$ is the parameter of the FFNN, $\mathbf{w}_{a\rightarrow o,o}\in \mathbb{R}^{m\times c^{*}}$ is a trainable weight vector, and $c^*\in \{\text{Positive},\text{Neutral},\text{Negative},\text{Invalid}\}$ is the number of sentiment polarities. Furthermore, we define the loss of the aspect-to-opinion direction as:

$$
\mathcal{J}_{a\rightarrow o} = -\sum_{i} y_i^{a\rightarrow o,a}\log\left(\mathbf{p}_i^{a\rightarrow o,a}\right) - \sum_{i}\sum_{j}^{G_a} y_{i,j}^{a\rightarrow o,o}\log\left(\mathbf{p}_{i,j}^{a\rightarrow o,o}\right) \tag{14}
$$

where $y_{i}^{a\rightarrow o,a}$ and $y_{i,j}^{a\rightarrow o,o}$ are the ground truth labels for an AT and for an OT given a specific valid AT, respectively.

# 3.4.2 Opinion-to-aspect Direction

In the opinion-to-aspect direction (red arrows and modules in Figure 2), the opinion decoder is deployed first to extract all OTs from the sentence. To minimize the number of model parameters, the opinion decoder shares the FFNN features of Equation (10) in both the aspect-to-opinion and opinion-to-aspect directions.
The probability distribution of the validity of OTs as well as the confidence scores can be obtained as:

$$
q_i^{o\rightarrow a,o} = \mathbf{w}_{o\rightarrow a,o}\, u_i^o \tag{15}
$$

$$
\mathbf{p}_i^{o\rightarrow a,o} = \operatorname{softmax}\left(q_i^{o\rightarrow a,o}\right) \tag{16}
$$

where $\mathbf{w}_{o\rightarrow a,o}\in \mathbb{R}^{m\times c^0}$ is a trainable weight vector.

Given the set $G_{o}$ of original span representations of all valid OTs, $\mathbf{g}_j^o\in G_o$, the aspect decoder is deployed to identify the ATs and their sentiment for each particular valid OT. Note that the aspect decoder in the opinion-to-aspect direction also shares the same FFNN features, described in Equation (7), with the aspect decoder in the aspect-to-opinion direction. The logits of ATs and their confidence scores in the opinion-to-aspect direction can be obtained by:

$$
\alpha_{i,j}^{o\rightarrow a} = \frac{\exp\left(u_i^a\right)}{\exp\left(\mathbf{g}_j^o\right)} \tag{17}
$$

$$
q_{i,j}^{o\rightarrow a,a} = \mathbf{w}_{o\rightarrow a,a}\left(u_i^a + \alpha_{i,j}^{o\rightarrow a}\cdot \mathbf{g}_j^o\right) \tag{18}
$$

$$
\mathbf{p}_{i,j}^{o\rightarrow a,a} = \operatorname{softmax}\left(q_{i,j}^{o\rightarrow a,a}\right) \tag{19}
$$

where $\mathbf{w}_{o\rightarrow a,a}\in \mathbb{R}^{m\times c^*}$ is a trainable weight vector.

Finally, the loss for the opinion-to-aspect direction is defined as:

$$
\mathcal{J}_{o\rightarrow a} = -\sum_{i} y_i^{o\rightarrow a,o}\log\left(\mathbf{p}_i^{o\rightarrow a,o}\right) - \sum_{i}\sum_{j}^{G_o} y_{i,j}^{o\rightarrow a,a}\log\left(\mathbf{p}_{i,j}^{o\rightarrow a,a}\right) \tag{20}
$$

where $y_{i}^{o\rightarrow a,o}$ and $y_{i,j}^{o\rightarrow a,a}$ are the ground truth labels.
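To make one decoding step concrete, the following plain-Python sketch scores a candidate span as an OT conditioned on a valid aspect span in the aspect-to-opinion direction (Eqs. 10-13). All names are ours; the FFNN is reduced to a single ReLU layer, `g_a` is assumed to have the FFNN's output dimensionality, and the attention weight of Eq. 11 is simplified to a scalar ratio of summed exponentials, since the printed element-wise form is ambiguous. The opinion-to-aspect direction (Eqs. 15-19) is symmetric with the roles of the two decoders swapped:

```python
import math

def relu_layer(x, W, b):
    # A single ReLU layer standing in for the decoder FFNN (Eqs. 7 and 10)
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def softmax(x):
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def score_opinion_given_aspect(g_i, g_a, W_o, b_o, w_head):
    """Aspect-to-opinion direction: score candidate span g_i as an OT,
    conditioned on a valid aspect span's features g_a (Eqs. 10-13,
    simplified)."""
    u_o = relu_layer(g_i, W_o, b_o)                     # Eq. 10
    # Scalar attention weight: our simplification of Eq. 11
    alpha = sum(math.exp(v) for v in u_o) / sum(math.exp(v) for v in g_a)
    fused = [u + alpha * g for u, g in zip(u_o, g_a)]   # inside Eq. 12
    q = [sum(wi * fi for wi, fi in zip(row, fused)) for row in w_head]
    return softmax(q)  # over {Positive, Neutral, Negative, Invalid} (Eq. 13)
```

The returned distribution plays the role of $\mathbf{p}_{i,j}^{a\rightarrow o,o}$; in training, its log-probability at the gold label enters the cross-entropy loss of Eq. 14.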
Then, we combine the above loss functions to form the loss objective of the entire model:

$$
\mathcal{J} = \mathcal{J}_{KL} + \mathcal{J}_{a\rightarrow o} + \mathcal{J}_{o\rightarrow a} \tag{21}
$$

Algorithm 1 Inference Strategy

Input: $\mathcal{T}_{a\rightarrow o}$ and $\mathcal{T}_{o\rightarrow a}$, the triplet extraction results in the aspect-to-opinion and opinion-to-aspect directions, respectively

1: Get the overall triplets from both extraction directions: $\mathcal{T} = \mathcal{T}_{a\to o}\cup \mathcal{T}_{o\to a}$
2: for $t_i\in \mathcal{T}$ do
3: for $t_j\in (\mathcal{T} - \{t_i\})$ do
4: $t_i = (a_i,o_i,c_i,s_i)$, $t_j = (a_j,o_j,c_j,s_j)$, where $s_i$ and $s_j$ are the confidence scores of the corresponding triplets
5: if $a_i\cap a_j\neq \emptyset$ and $o_i\cap o_j\neq \emptyset$ then
6: if $s_i > s_j$ then
7: $\mathcal{T} = \mathcal{T} - \{t_j\}$
8: else
9: $\mathcal{T} = \mathcal{T} - \{t_i\}$
10: end if
11: end if
12: end for
13: end for
14: return $\mathcal{T}$

# 3.5 Inference

In contrast to the mutual exclusivity of triplets in token-level methods, a span-level model cannot guarantee that there are no conflicts between any two triplets. Therefore, we propose an inference strategy to eliminate potentially conflicting triplets during the inference process. As illustrated in Algorithm 1, we first combine the extraction results from both directions by taking the union set $\mathcal{T}$ (line 1). Afterwards, for each pair of triplets in the overall set $\mathcal{T}$ that overlap in both aspect $a$ and opinion $o$ (line 5), the conflict is eliminated by discarding the triplet with the lower confidence score $s$ (lines 6-9). Note that in the condition determining whether two triplets conflict with each other (line 5), the test of whether the intersections are empty is performed on position indices, rather than on tokens.
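Algorithm 1 can be rendered as the following plain-Python sketch. This is our own greedy formulation (keep triplets in descending confidence order, discarding any that conflict with an already-kept one), which matches the pairwise loop's keep-the-higher-score behavior; the names are ours:

```python
def resolve_conflicts(triplets):
    """Greedy rendering of Algorithm 1: keep the higher-confidence member
    of every pair of triplets whose aspect spans AND opinion spans overlap
    on position indices. Each triplet is (aspect_span, opinion_span,
    sentiment, score), with spans as inclusive (start, end) index pairs."""
    def overlap(s1, s2):
        return s1[0] <= s2[1] and s2[0] <= s1[1]

    result = []
    for t in sorted(triplets, key=lambda t: -t[3]):  # highest score first
        a, o = t[0], t[1]
        if not any(overlap(a, r[0]) and overlap(o, r[1]) for r in result):
            result.append(t)
    return result
```

For example, given (hot dogs, top notch, POS, 0.9) and (the hot dogs, top notch, POS, 0.7), the two aspect spans overlap and the opinion spans coincide, so only the 0.9-scored triplet survives.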
# 4 Experiments

# 4.1 Datasets

To verify the effectiveness of our network, we conduct experiments on four benchmark datasets$^2$ (Xu et al., 2020), which are constructed based on the original SemEval ABSA Challenges and the datasets of Fan et al. (2019). Table 1 lists the
| Datasets | | #S | POS | NEU | NEG | #SW | #MW |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 14LAP | Train | 1266 | 1692 | 166 | 480 | 1586 | 752 |
| | Dev | 310 | 404 | 54 | 119 | 388 | 189 |
| | Test | 492 | 773 | 66 | 155 | 657 | 337 |
| 14RES | Train | 906 | 817 | 126 | 517 | 824 | 636 |
| | Dev | 219 | 169 | 36 | 141 | 190 | 156 |
| | Test | 328 | 364 | 63 | 116 | 291 | 252 |
| 15RES | Train | 605 | 783 | 25 | 205 | 678 | 335 |
| | Dev | 148 | 185 | 11 | 53 | 165 | 84 |
| | Test | 322 | 317 | 25 | 143 | 297 | 188 |
| 16RES | Train | 857 | 1015 | 50 | 329 | 918 | 476 |
| | Dev | 210 | 252 | 11 | 76 | 216 | 123 |
| | Test | 326 | 407 | 29 | 78 | 344 | 170 |
Table 1: Statistics of the datasets. '#S' denotes the number of sentences; 'POS', 'NEU', and 'NEG' denote the numbers of positive, neutral, and negative triplets, respectively. '#SW' denotes the number of triplets in which the AT and OT are both single-word spans. '#MW' denotes the number of triplets in which at least one of the AT or OT is a multi-word span.

statistics of these datasets.

# 4.2 Experimental Setting

We adopt the cased base version of BERT (Devlin et al., 2018) in our experiments, which contains 110M parameters. During training, we use AdamW (Loshchilov and Hutter, 2017) to optimize the model parameters. The fine-tuning rate for BERT and the learning rate for the other modules are set to 1e-5 and 1e-4, respectively. The mini-batch size is set to 16 and the dropout rate to 0.1. The maximum length of generated spans is set to 8. We train our framework for a total of 120 epochs on an NVIDIA Tesla V100 GPU.

# 4.3 Evaluation

To comprehensively evaluate the performance of different methods, we use precision, recall, and F1-score as the evaluation metrics. Extracted ATs and OTs are considered correct if and only if the predicted spans exactly match the ground truth spans. In the experiments, we report the test results at the epoch where the model achieves the best performance on the development set.

# 4.4 Baselines

To demonstrate the effectiveness of our network, we compare our method with the following baselines:

- Peng-two-stage (Peng et al., 2020) is a two-stage pipeline model. Peng-two-stage extracts
| Model | 14LAP | | | 14RES | | | 15RES | | | 16RES | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 |
| PENG-two-stage | 40.40 | 47.24 | 43.50 | 44.18 | 62.99 | 51.89 | 40.97 | 54.68 | 46.79 | 46.76 | 62.97 | 53.62 |
| JET$^t$ | 51.48 | 42.65 | 46.65 | 70.20 | 53.02 | 60.41 | 62.14 | 47.25 | 53.68 | 72.12 | 57.20 | 63.41 |
| JET$^o$ | 58.47 | 43.67 | 50.00 | 67.97 | 60.32 | 63.92 | 58.35 | 51.43 | 54.67 | 64.77 | 61.29 | 62.98 |
| GTS-BERT | 57.52 | 51.92 | 54.58 | 70.92 | 69.49 | 70.20 | 59.29 | 58.07 | 58.67 | 68.58 | 66.60 | 67.58 |
| Dual-MRC | 57.39 | 53.88 | 55.58 | 71.55 | 69.14 | 70.32 | 63.78 | 51.87 | 57.21 | 68.60 | 66.24 | 67.40 |
| B-MRC | 70.89* | 50.20* | 58.78* | 75.41* | 64.04* | 69.26* | 69.83* | 56.04* | 58.74* | 69.03* | 66.02* | 67.49* |
| Span-ASTE | 63.44 | 55.84 | 59.38 | 72.89 | 70.89 | 71.85 | 62.18 | 64.45 | 63.27 | 69.45 | 71.17 | 70.26 |
| Ours | 65.68 | 59.88 | **62.65** | 76.36 | 72.43 | **74.34** | 69.93 | 60.41 | **64.82** | 71.59 | 72.57 | **72.08** |
both aspect-sentiment pairs and opinion terms in the first stage. In the second stage, Peng-two-stage pairs up the extraction results into triplets via a relation classifier.

- JET (Xu et al., 2020) is an end-to-end model which proposes a novel position-aware tagging scheme to jointly extract the triplets. It also designs factorized feature representations so as to effectively capture the interactions among the triplet factors.
- GTS (Wu et al., 2020) is an end-to-end model which formulates ASTE as a unified grid tagging task. It first extracts the sentiment feature of each token, and then gets the initial prediction probabilities of token pairs based on these token-level features. It also designs an inference strategy to exploit the potential mutual indications between different opinion factors and performs the final prediction.
- Dual-MRC (Mao et al., 2021) is a joint training model which consists of two machine reading comprehension (MRC) modules. One is for aspect term extraction; the other is for aspect-oriented opinion term extraction and sentiment classification.
- B-MRC (Chen et al., 2021) formalizes the ASTE task as a multi-turn machine reading comprehension task, and proposes three types of queries to extract targets, opinions, and the sentiment polarities of aspect-opinion pairs, respectively.
- Span-ASTE (Xu et al., 2021) considers all possible spans in a sentence and builds the interaction between whole spans of aspect terms and opinion terms when predicting their sentiment relation. They also propose a dual-channel span pruning strategy to ease the high

Table 2: Precision (%), Recall (%), and F1-score (%) on the test sets of the ASTE task. State-of-the-art results are marked bold. * indicates that the result is reproduced by us.
| | KL Loss | JS Loss | EM Loss | CS Loss | no Loss |
| --- | --- | --- | --- | --- | --- |
| Ours | 62.65 | 62.34 | 61.17 | 61.51 | 60.82 |
| w/o IS | 61.48 | 61.05 | 60.40 | 60.82 | 60.08 |
| Ours$_{a\rightarrow o}$ | 62.14 | 61.86 | 60.08 | 61.70 | 60.66 |
| w/o IS | 61.09 | 61.18 | 60.08 | 60.67 | 58.78 |
| Ours$_{o\rightarrow a}$ | 62.38 | 62.03 | 60.88 | 61.70 | 60.68 |
| w/o IS | 61.75 | 60.15 | 59.90 | 60.41 | 59.38 |
Table 3: Experimental results of the ablation study on the 14LAP dataset (F1-score, %). 'w/o IS' denotes not using the inference strategy described in Section 3.5. 'KL Loss', 'JS Loss', 'EM Loss', and 'CS Loss' denote that the similar span separation loss is constructed based on KL divergence, JS divergence, Euclidean metric, and cosine similarity, respectively. 'no Loss' means the similar span separation loss is not applied.

computational cost caused by span enumeration.

# 4.5 Main Results

Table 2 reports the results of our framework and the baseline models. According to the results, our framework achieves state-of-the-art performance on all datasets. Specifically, our framework surpasses the best baselines by an average of 2.3 F1 points on ASTE. This result demonstrates that our framework can take advantage of bidirectional decoding and efficiently distinguish span representations. Although some of the recall scores are slightly lower than Span-ASTE's, our precision significantly outperforms the previous baselines on most datasets, which shows the higher prediction accuracy of our network. It is worth noting that B-MRC and Dual-MRC achieve better performance than JET and PENG-two-stage. This is probably because B-MRC and Dual-MRC formalize the ASTE task as a multi-turn machine reading comprehension task and benefit from asking the model questions. Unlike those approaches, Span-ASTE and our method both utilize span-level interactions to handle the ASTE task and avoid cascading errors. Moreover, our model outperforms Span-ASTE because our method identifies triplets from both the aspect-to-opinion and opinion-to-aspect directions, rather than matching each aspect span with each opinion span. Besides, our network also takes advantage of the similar span separation loss and the inference strategy to overcome the lack of mutual exclusivity among spans.
# 4.6 Ablation Study

To trace the source of the improvements of our network, we conduct ablation experiments on the 14LAP dataset. As shown in Table 3, our bidirectional model yields better results than the unidirectional models, which clearly indicates the benefit of collaborative decoding of the span representations in both directions. The results in the opinion-to-aspect direction are better than in the other direction, which may be due to the relative ease of extracting opinion terms in the 14LAP dataset.

Moreover, the inference strategy improves model performance. However, the improvement it brings is not large, because conflicting triplets tend to occur among multi-token results, and only a small percentage of triplets in the 14LAP dataset contain multi-token terms. We believe the effect of the inference strategy will be more pronounced on datasets rich in multi-token triplets.

In addition, to demonstrate the effectiveness of our proposed similar span separation loss based on KL divergence, we further design similar span separation losses based on JS divergence, Euclidean distance, and cosine similarity. The experimental results show that all these loss functions boost our network, and the separation loss based on KL divergence performs best. Note that numerous similarity measures can be used to separate similar spans, among which there may be measures that bring even more improvement to the model.

# 4.7 Effect of Entity Length

To investigate the performance of different methods on ATE and OTE with different entity lengths, we report the F1 scores of our framework, Span-ASTE, GTS, and B-MRC on the extraction tasks with different lengths of entities. The results are illustrated in Figure 3. As the entity length increases, the performance gap between our framework and the other models becomes more obvious.
![](images/6759baa17ecd1e30e7a2b8266230519891000536bc489bb7714f362eda8c2f80.jpg)
(a) ATE on 14LAP

![](images/015115db086b5a25864436c151783c395504bb16bdb3b6c7271fbfa394962caf.jpg)
(b) OTE on 14LAP
Figure 3: Effects of entity length for aspect term extraction and opinion term extraction (F1-score, %).

Since our method directly models span-level features for each entity and alleviates the lack of mutual exclusivity among spans, it is not greatly affected as entity length increases. In fact, most of the contribution to the improvement of our model comes from its performance on multi-token entities.

# 4.8 Effect of Multiple Triplets
| | All | 1 | 2 | 3 | 4 | ≥5 |
| --- | --- | --- | --- | --- | --- | --- |
| Ours | 62.65 | 62.80 | 63.54 | 64.81 | 57.14 | 60.46 |
| Span-ASTE | 59.65 | 61.18 | 61.01 | 61.67 | 56.00 | 35.00 |
| GTS | 58.57 | 58.21 | 60.26 | 65.00 | 56.57 | 30.77 |
| B-MRC | 58.78 | 58.10 | 64.75 | 55.31 | 35.56 | 0.00 |
Table 4: Effects of multiple triplets in a sentence on 14LAP (F1-score, %).

To further verify the ability of our framework to handle multiple triplets, we compare the performance of our network and the other baselines on the ASTE task with different numbers of triplets per sentence; the results are shown in Table 4. We divide the sentences in the 14LAP test set into 5 subclasses, containing sentences with 1, 2, 3, 4, or $\geq 5$ triplets. When extracting triplets from sentences that contain 1 or 2 triplets, the performance of our framework is competitive with the other models. However, as the number of triplets increases, the performance of Span-ASTE, GTS, and B-MRC decreases significantly, while the performance of our network remains stable or even slightly increases. These experimental results demonstrate the efficiency and stability of our framework in handling multiple triplets in a sentence.

# 5 Conclusions

In this work, we propose a span-level bidirectional network for the ASTE task. This span-level model takes advantage of both the aspect-to-opinion and opinion-to-aspect directions. The bidirectional decoding ensures that either an AT or an OT can trigger an aspect sentiment triplet, which is more in line with human perception. To address the shortcoming that mutual exclusivity cannot be guaranteed among spans, we deploy the similar span separation loss to guide the model in discriminating similar spans. We further design an inference strategy to eliminate the conflicting triplet results that are specific to span-level models. The experimental results demonstrate that our network significantly outperforms the compared baselines and achieves state-of-the-art performance.

# Limitations

Although the previous sections show the strong performance of the network we designed, our model still has some weaknesses.
| | MACs (G) | Params (M) |
| --- | --- | --- |
| Ours | 120.044 | 129.884 |
| Span-ASTE | 444.55 | 110 |
| B-MRC | 19.624 | 85.611 |
| GTS | 520.765 | 88.006 |
Table 5: Efficiency comparison.

First, our model uses spans as input, and enumerating all possible spans inevitably increases the input size of the model, so span-level models tend to require more computation than token-level models. As shown in Table 5, our network requires about 6 times more floating-point computations than the B-MRC model. The Span-ASTE and GTS models require even more computation: Span-ASTE needs to match every aspect term with every opinion term, and GTS extracts triplets by classifying the elements of a square matrix whose rows and columns are formed from the sentence.

Second, to reduce the input size of the model, we set the maximum span length to 8 so as to include as many potential aspect terms and opinion terms as possible. However, on datasets with long extraction targets, a span-level model must increase this maximum span limit, which affects its performance. Therefore, our model is suitable only for tasks that extract short targets.

Third, both the similar span separation loss and the inference strategy proposed in this paper are used to alleviate the lack of mutual exclusivity in span-level models, while the inputs of token-level models are naturally mutually exclusive. The similar span separation loss and inference strategy are therefore not applicable to token-level models.

# Ethics Statement

This article does not contain any study with human participants or animals performed by any of the authors. All authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

# References

Shaowei Chen, Jie Liu, Yu Wang, Wenzheng Zhang, and Ziming Chi. 2020. Synchronous double-channel recurrent network for aspect-opinion pair extraction.
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6515-6524. Association for Computational Linguistics.

Shaowei Chen, Yu Wang, Jie Liu, and Yuelin Wang. 2021. Bidirectional machine reading comprehension for aspect sentiment triplet extraction. arXiv preprint arXiv:2103.07665.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 2: Short Papers, pages 49-54. The Association for Computer Linguistics.

Zhifang Fan, Zhen Wu, Xin-Yu Dai, Shujian Huang, and Jiajun Chen. 2019. Target-oriented opinion words extraction with target-fused neural sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2509-2518. Association for Computational Linguistics.

Lei Gao, Yulong Wang, Tongcun Liu, Jingyu Wang, Lei Zhang, and Jianxin Liao. 2021. Question-driven span labeling model for aspect-opinion pair extraction. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 12875-12883. AAAI Press.

Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018. Exploiting document knowledge for aspect-level sentiment classification.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 579-585. Association for Computational Linguistics. +Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2019. An interactive multi-task learning network for end-to-end aspect-based sentiment analysis. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 504-515. Association for Computational Linguistics. +Mengting Hu, Shiwan Zhao, Li Zhang, Keke Cai, Zhong Su, Renhong Cheng, and Xiaowei Shen. 2019. CAN: constrained attention networks for multi-aspect sentiment analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 4600-4609. Association for Computational Linguistics. +Hao Li and Wei Lu. 2017. Learning latent sentiment scopes for entity-level sentiment analysis. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 3482-3489. AAAI Press. +Xin Li, Lidong Bing, Piji Li, and Wai Lam. 2019a. A unified model for opinion target extraction and target sentiment prediction. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6714-6721. AAAI Press. +Xin Li, Lidong Bing, Piji Li, Wai Lam, and Zhimou Yang. 2018. Aspect term extraction with history attention and selective transformation. 
In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4194-4200. ijcai.org. +Xin Li and Wai Lam. 2017. Deep multi-task learning for aspect term extraction with memory interaction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2886-2892. Association for Computational Linguistics. +Zheng Li, Ying Wei, Yu Zhang, Xiang Zhang, and Xin Li. 2019b. Exploiting coarse-to-fine task transfer for aspect-level sentiment classification. In The + +Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 4253-4260. AAAI Press. +Kang Liu, Heng Li Xu, Yang Liu, and Jun Zhao. 2013. Opinion target extraction using partially-supervised word alignment model. In *IJCAI* 2013, Proceedings of the 23rd International Joint Conference on Artificial Intelligence, Beijing, China, August 3-9, 2013, pages 2134-2140. *IJCAI/AAAI*. +Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. CoRR, abs/1711.05101. +Dehong Ma, Sujian Li, Fangzhao Wu, Xing Xie, and Houfeng Wang. 2019. Exploring sequence-to-sequence learning in aspect term extraction. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 3538-3547. Association for Computational Linguistics. +Yue Mao, Yi Shen, Chao Yu, and Longjun Cai. 2021. A joint training dual-mrc framework for aspect based sentiment analysis. 
In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 13543-13551. AAAI Press. +Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, and Luo Si. 2020. Knowing what, how and why: A near complete solution for aspect-based sentiment analysis. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8600-8607. AAAI Press. +Soujanya Poria, Erik Cambria, and Alexander F. Gelbukh. 2016. Aspect extraction for opinion mining with a deep convolutional neural network. Knowl. Based Syst., 108:42-49. +Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Comput. Linguistics, 37(1):9-27. +Duyu Tang, Bing Qin, and Ting Liu. 2016. Aspect level sentiment classification with deep memory network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 214-224. The Association for Computational Linguistics. + +Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016. Recursive neural conditional random fields for aspect-based sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 616-626. The Association for Computational Linguistics. +Shengqiong Wu, Hao Fei, Yafeng Ren, Donghong Ji, and Jingye Li. 2021. Learn from syntax: Improving pair-wise aspect and opinion terms extraction with rich syntactic knowledge. 
In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, pages 3957-3963. ijcai.org. +Zhen Wu, Chengcan Ying, Fei Zhao, Zhifang Fan, Xinyu Dai, and Rui Xia. 2020. Grid tagging scheme for aspect-oriented fine-grained opinion extraction. CoRR, abs/2010.04640. +Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2018. Double embeddings and cnn-based sequence labeling for aspect extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 592-598. Association for Computational Linguistics. +Lu Xu, Yew Ken Chia, and Lidong Bing. 2021. Learning span-level interactions for aspect sentiment triplet extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4755-4766. Association for Computational Linguistics. +Lu Xu, Hao Li, Wei Lu, and Lidong Bing. 2020. Position-aware tagging for aspect sentiment triplet extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 2339-2349. Association for Computational Linguistics. +Jianfei Yu, Jing Jiang, and Rui Xia. 2019. Global inference for aspect and opinion terms co-extraction based on multi-task neural networks. IEEE ACM Trans. Audio Speech Lang. Process., 27(1):168-177. +He Zhao, Longtao Huang, Rong Zhang, Quan Lu, and Hui Xue. 2020. SpanMlt: A span-based multi-task learning framework for pair-wise aspect and opinion terms extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 3239-3248. Association for Computational Linguistics.
\ No newline at end of file diff --git a/aspanlevelbidirectionalnetworkforaspectsentimenttripletextraction/images.zip b/aspanlevelbidirectionalnetworkforaspectsentimenttripletextraction/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5bb317fc02c09b74b2db876b90e56d5e2373732c --- /dev/null +++ b/aspanlevelbidirectionalnetworkforaspectsentimenttripletextraction/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e356f4f07eb85ed210b0e3392f131ee407cc521a28c8191bb2c711c356eff1a +size 399191 diff --git a/aspanlevelbidirectionalnetworkforaspectsentimenttripletextraction/layout.json b/aspanlevelbidirectionalnetworkforaspectsentimenttripletextraction/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..cb4348e2230422679d39075a1c5229ac021fe93e --- /dev/null +++ b/aspanlevelbidirectionalnetworkforaspectsentimenttripletextraction/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f712c0d525f1a8f116998f5fa36d4cdf37376aa71265e31e50361b10427bc06 +size 332305 diff --git a/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/9fccba9b-6a92-4313-859c-7dc2a93a032a_content_list.json b/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/9fccba9b-6a92-4313-859c-7dc2a93a032a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b87cac29e17708465dce47a89b9fee9dc1ed717e --- /dev/null +++ b/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/9fccba9b-6a92-4313-859c-7dc2a93a032a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9469d02a5f0395a6b2a605057ab060c39ab2f8b9b332a59ac9460664c1ca5b38 +size 77726 diff --git a/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/9fccba9b-6a92-4313-859c-7dc2a93a032a_model.json 
b/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/9fccba9b-6a92-4313-859c-7dc2a93a032a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..24430f7e2db45ecca8e8a46256f46b5ef2cbec6a --- /dev/null +++ b/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/9fccba9b-6a92-4313-859c-7dc2a93a032a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a106ec6dc784e4d94e008ffe486bfd9be12afac980d1d04ac75ee16eaded455f +size 91874 diff --git a/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/9fccba9b-6a92-4313-859c-7dc2a93a032a_origin.pdf b/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/9fccba9b-6a92-4313-859c-7dc2a93a032a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7a31f2f49318cef618362a66d1c78bea8ad293fb --- /dev/null +++ b/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/9fccba9b-6a92-4313-859c-7dc2a93a032a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72716be01da8bb9e6b603f0dd28d7eb729b10dcf1d8d843df129aa474ac1ac9f +size 400610 diff --git a/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/full.md b/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/full.md new file mode 100644 index 0000000000000000000000000000000000000000..124a551bb748818298ce3a8565bc999f651a2afd --- /dev/null +++ b/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/full.md @@ -0,0 +1,383 @@ +# A Speaker-Aware Co-Attention Framework for Medical Dialogue Information Extraction + +Yuan Xia $^{1\dagger}$ , Zhenhui Shi $^{1\dagger}$ , Jingbo Zhou $^{1*}$ , Jiayu Xu $^{1}$ , Chao Lu $^{1}$ , Yehui Yang $^{1}$ , Lei Wang $^{1}$ , Haifeng Huang $^{1}$ , Xia Zhang $^{2}$ , Junwei Liu $^{1}$ + +1Baidu Inc., China. 2Neusoft Corporation, China. 
+ +$^{1}$ {xiayuan,shizhenhui,zhoujingbo,xujiayu03,luchao,yangyehui01,wanglei15,huanghaifeng,liujunwei}@baidu.com, + +$^{2}$ zhangx@neusoft.com + +# Abstract + +With the development of medical digitization, the extraction and structuring of Electronic Medical Records (EMRs) have become challenging but fundamental tasks. Accurately and automatically extracting structured information from medical dialogues is especially difficult because the information needs to be inferred from complex interactions between the doctor and the patient. To this end, in this paper, we propose a speaker-aware co-attention framework for medical dialogue information extraction. To better utilize the pre-trained language representation model to perceive the semantics of the utterance and the candidate item, we develop a speaker-aware dialogue encoder with multi-task learning, which takes the speaker's identity into account. To deal with complex interactions between different utterances and the correlations between utterances and candidate items, we propose a co-attention fusion network to aggregate the utterance information. We evaluate our framework on public medical dialogue extraction datasets to demonstrate the superiority of our method, which outperforms the state-of-the-art methods by a large margin. + +# 1 Introduction + +In the past decade, the collection and usage of Electronic Medical Records (EMRs) have proved to be one of the most important applications in the process of medical digitization. However, recording and writing EMRs may place a significant burden on doctors. Given the breakthrough advances in speech recognition technology, conversations between doctors and patients can be accurately recorded as text. However, such unstructured medical dialogue data cannot be easily utilized for medical research.
How to automatically extract the structured information from these unstructured textual medical dialogue data is an essential step to accelerate medical digitization. + +# Dialogue Window + +Patient: Doctor, could you please tell me is it a coronary disease? + +Doctor: Did you feel angina? + +Patient: No, I felt there is a sense of suppression in the chest. + +Doctor: The result of echocardiography is normal. Don't worry. + +Patient: OK, thanks, doctor. + +# Annotation Labels + +(Coronary disease: patient-unk), (Angina: patient-neg), (Chest tightness: patient-pos), (Echocardiography: doctor-pos) + +![](images/d86f9353dfdda0085a502a56bb1ff68ebd4e3082456616af0b65e2edcf9a385a.jpg) +Figure 1: An example of a patient-doctor dialogue and the corresponding annotated labels. Each label pairs a candidate item derived from the utterances with a status. + +Compared with general medical information extraction, the crucial challenge of medical dialogue extraction is that it has to take the speaker's identity and utterance interactions into consideration. In conventional information extraction, a relation can largely be inferred from a single sentence or paragraph. In the medical dialogue extraction task, however, the candidate item and its status need to be detected and then verified across the conversation between the doctor and the patient. An example of a patient-doctor dialogue and the corresponding annotated labels is shown in Figure 1. For instance, the doctor asks the patient, "Did
(2019) describe a novel model that extracts the symptoms mentioned in clinical conversations along with their status. The annotation of their status does not consider the speaker's identity into account. Lin et al. (2019) make symptom recognition and symptom inference in medical dialogues, and propose a global attention mechanism to capture symptom-related information. Zhang et al. (2020) develop a medical information extractor based on a simple deep matching module to take turn interaction into consideration. Thus, all existing methods fail to take the speaker into consideration, and the simple utterance combination method such as just concatenating all utterances together with flat attention cannot grasp sufficient information among utterance interactions in the medical dialogue. + +To tackle the above challenge, we propose a Speaker-aware co-Attention Framework for medical dialogue Extraction (name SAFE for short). First, to better predict the status of the candidate item in the medical dialogue, we should both consider the contextual information from the dialogue and be aware of the identity of the speaker. For the annotated label (echocardiography: doctor-positive) in the dialogue shown in Figure 1, being aware of the identity (patient or doctor) of the speaker can help make a correct inference. Second, we propose an utterance-based co-attention graph network to perceive complex correlations between different utterances. + +We summarize our contributions as follows: + +- We propose a new framework (SAFE) for medical dialogue extraction, which can better utilize the pre-trained language representation model to grasp the semantics of both utterances and candidate items. +- We develop a novel speaker-aware encoder and a co-attention fusion method with multitask learning and graph networks, which takes the speaker's identity and correlations be + +tween utterances and candidate items into consideration. 
+ +- We evaluate our framework on the public medical dialogue datasets to demonstrate the superiority of our method, which can outperform the state-of-the-art methods by a large margin. + +# 2 Related Work + +# 2.1 Pre-trained Language Models + +Pre-trained language models, like BERT (Devlin et al., 2019), Roberta (Liu et al., 2019), XLNet (Yang et al., 2019), ERNIE (Sun et al., 2020), T5(Raffel et al., 2020), BART(Peng et al., 2021) and GPT3 (Brown et al., 2020), can achieve huge gains on many Natural Language Processing (NLP) tasks, such as GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019) benchmarks. In our proposed framework, we utilize the fine-tuned BERT model as the initial encoder to obtain the representations for the utterance and the candidate item. + +# 2.2 Medical Dialogue Extraction + +Extracting information from EMR texts has attracted much research attention in both NLP and biomedical domains (Xia et al., 2021). Du et al. (2019) propose a span-attribute tagging (SAT) model and a variant of the sequence-to-sequence model to solve the symptom tagging and extraction problems. Lin et al. (2019) present a global attention mechanism, which perceives the symptom-related information from both dialogues and corpus to improve the performance of symptom recognition and symptom inference. However, the above works mainly focus on the sequential labeling and medical name entity recognition (NER), and fail to consider the complex interaction between utterances. In industrial applications, Peng et al. (2021) propose a dialogue-based information extraction system that integrates existing NLP technologies for medical insurance assessment, while their motivation is to reduce the time cost of the insurance assessment procedure. 
+ +The most similar work related to our study is (Zhang et al., 2020), which proposes a medical information extractor (MIE) by using an LSTM (Hochreiter and Schmidhuber, 1997) model as an encoder module and then adopting an aggregate module to take the utterance interaction into consideration. Our study differs from (Zhang et al., 2020) in two points. On the one hand, we develop a multi-task learning method to train a speaker-aware dialogue encoder module that takes the speaker's information into consideration. On the other hand, we utilize a co-attention fusion mechanism to perceive complex interactions between different utterances and the correlation with the candidate item. + +![](images/17a29f0ccabc277b244c747136e9f6618f21f6dcca98f888b9adc26257bdcb42.jpg) +Figure 2: The Illustration of the Speaker-Aware Co-Attention Framework for Medical Dialogue Extraction (SAFE). It includes a three-stage pipeline system: a Speaker-Aware Dialogue Encoder Module (SAE), a Co-Attention Fusion Module (CAF), and an Inference Module (IM). + +# 3 Preliminaries + +In this section, we formally define the problem of medical dialogue extraction (MDE). A dialogue with $n$ tokens and $m$ utterances can be defined as $D = (U_1^{r_1}, U_2^{r_2}, \dots, U_m^{r_m})$ , where $U_i^{r_i}$ is the $i$ -th utterance in the dialogue and $r_i \in \{0, 1\}$ indicates the speaker's identity (i.e. whether the utterance belongs to the patient or the doctor). A candidate item $I \in \mathcal{I}$ is a medical term (a symptom, disease, surgery, etc.) that can be extracted from a dialogue $D$ . For each candidate item $I$ , we also need to identify its status $S \in \mathcal{S}$ , where $\mathcal{S}$ is the set {patient-negative, patient-positive, patient-unknown, doctor-positive, doctor-negative}, which indicates whether the candidate item is confirmed or denied by doctors and patients.
+ +Finally, we define the task of medical dialogue extraction as follows: given a medical dialogue $D \in \mathcal{D}$ , a candidate item $I \in \mathcal{I}$ and its status $S \in \mathcal{S}$ , MDE can be formulated as predicting the label $f: D \to \mathcal{Y}$ , where $\mathcal{Y}$ is a matrix generated by the Cartesian product of the candidate items $\mathcal{I}$ and the statuses $\mathcal{S}$ , i.e. $\mathcal{Y} = (y_{ij}) \in \mathbb{R}^{|\mathcal{I}| \times |\mathcal{S}|}$ , and $y_{ij} = 1$ indicates that the medical dialogue $D$ contains the candidate $I_i$ with the status $S_j$ . Note that, different from the relation extraction (RE) task, the label space for MDE is very sparse, which makes it a more challenging problem. + +# 4 Method + +We develop a three-stage pipeline system: (1) a Speaker-Aware Dialogue Encoder Module (SAE), which turns the utterances in the medical dialogue and the candidate item into node feature representations while taking the speaker identity into account; (2) a Co-Attention Fusion Module (CAF), which takes the interactions between the utterances and the correlation between utterance and candidate item into consideration; and (3) an Inference Module (IM), which utilizes the fused representation for the final dialogue information extraction. The full pipeline of our proposed medical dialogue extraction framework is illustrated in Figure 2. + +# 4.1 Speaker-Aware Dialogue Encoder Module + +An effective medical dialogue encoder should capture the semantics of the utterance and perceive the speaker's identity. In this work, we design a multi-task learning method to pre-train our speaker-aware dialogue encoder on a Speaker Recognition Task (SRT) and a Status Entailment Task (SET). For the SRT, we design a speaker recognition task to distinguish the identity of the speaker.
For the SET, we leverage a pre-trained language model such as BERT to train a status entailment task to perceive the semantics of the dialogue. Figure 3 illustrates the training process of our speaker-aware dialogue encoder module. + +![](images/0271a64c79119253e6129e17434cddbe63ea5b10ae3ac1f4c14f92308157ecfa.jpg) +Figure 3: Illustration of the multi-task fine-tuning of the Speaker-Aware Dialogue Encoder. + +# Speaker Recognition Task + +Given an utterance in a dialogue, if the encoder itself can be aware of whether the speaker is a patient or a doctor, it will help to infer the corresponding status for the candidate item. We pre-train the BERT-base encoder with the auxiliary speaker recognition task, which is designed to distinguish whether an utterance in the medical dialogue is spoken by the patient or by the doctor; the task is illustrated in the upper part of Figure 3. We construct binary training samples from the medical dialogue corpus: utterances from the patient are labeled as 1, and utterances from the doctor are labeled as 0. We mask the words patient and doctor at the beginning of each utterance, which prevents the model from distinguishing the speaker only from the beginning prompt words. + +First, we feed the utterance $U_i^{r_i}$ into the BERT-base encoder to get the utterance representation $\mathbf{U}_i^{B}$ : + +$$ +\mathbf{U}_i^{B} = \mathrm{Encoder}^{(\mathrm{BERT})}(U_i^{r_i}). \tag{1} +$$ + +Then, we feed the utterance representation into a binary classifier, which is composed of a dense layer and a softmax layer. The speaker recognition probability is as follows: + +$$ +P(r_i = 1 \mid U_i^{r_i}) = \mathrm{softmax}(\mathbf{W}_r \mathbf{U}_i^{B}), \tag{2} +$$ + +where $\mathbf{W}_r \in \mathbb{R}^{2 \times d}$ denotes the weight matrix and $d$ is the number of hidden dimensions of the encoder.
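As a concrete illustration of the 2-way softmax head in Eq. (2) and the prompt masking above, the sketch below is a minimal NumPy mock-up, not the authors' implementation: the placeholder `u_cls` vector and matrix `W_r` stand in for the trained BERT [CLS] representation and dense layer, and `mask_speaker_prompt` is a hypothetical helper.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def speaker_probs(u_cls, W_r):
    """Eq. (2): P(r_i | U_i) = softmax(W_r U_i^B) over {doctor = 0, patient = 1}."""
    return softmax(W_r @ u_cls)

def mask_speaker_prompt(utterance):
    """Hide the leading speaker word so the encoder cannot classify the
    speaker from the beginning prompt alone (hypothetical helper)."""
    _, _, text = utterance.partition(":")
    return "[MASK]:" + text

# toy example with d = 4 hidden dimensions
u_cls = np.ones(4)        # placeholder [CLS] representation
W_r = np.zeros((2, 4))    # untrained head -> uniform prediction
p = speaker_probs(u_cls, W_r)
```

With an untrained (all-zero) head, both classes receive probability 0.5; during pre-training, the head and encoder are updated by the SRT binary cross-entropy loss.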
The loss function of the SRT for a single dialogue is as follows: + +$$ +\mathcal{L}_{SRT} = \frac{1}{M} \sum_{i} -r_i \log P(r_i = 1 \mid U_i^{r_i}) - (1 - r_i) \log P(r_i = 0 \mid U_i^{r_i}), \tag{3} +$$ + +where $M$ is the number of utterances in a dialogue, and $r_i$ is the label of the speaker. + +# Status Entailment Task + +We jointly pre-train the BERT encoder with another auxiliary task, status entailment, which is designed to entail the status of the candidate item; it is illustrated at the bottom of Figure 3. We re-formulate medical dialogue information extraction as a status entailment task: given a medical dialogue and a candidate item, the model must infer the candidate's status conditioned on the dialogue and the candidate item information. + +First, we concatenate all the utterances in a medical dialogue $D$ and the candidate item $I$ together, and feed them into the BERT-base encoder to get the dialogue representation $\mathbf{D}^B$ : + +$$ +\mathbf{D}^{B} = \mathrm{Encoder}^{(\mathrm{BERT})}(D, I). \tag{4} +$$ + +Then, we feed the dialogue representation into a multi-class (multi-status) classifier, which is also composed of a dense layer and a softmax layer. The status entailment probability is as follows: + +$$ +P(y \mid D, I) = \mathrm{softmax}(\mathbf{W}_e \mathbf{D}^{B}), \tag{5} +$$ + +where $\mathbf{W}_e \in \mathbb{R}^{C \times d}$ denotes the weight matrix, $d$ is the number of hidden dimensions of the encoder, and $C$ is the number of status classes. The loss function for the SET is as follows: + +$$ +\mathcal{L}_{SET} = \mathrm{CrossEntropy}(y, P(y \mid D, I)).
\tag{6} +$$ + +where $y$ is the ground-truth status label for the candidate item in the dialogue, and $\mathrm{CrossEntropy}(\cdot)$ is the cross-entropy loss function. + +# Joint Optimization + +The final loss function for the speaker-aware encoder $\mathrm{Encoder}^{(\mathrm{SA})}$ is as follows: + +$$ +\mathcal{L}_{SAE} = \lambda \mathcal{L}_{SRT} + (1 - \lambda) \mathcal{L}_{SET}, \tag{7} +$$ + +where $\mathcal{L}_{SRT}$ and $\mathcal{L}_{SET}$ are the losses for the speaker recognition task and the status entailment task, respectively, and $\lambda$ is a hyper-parameter that controls the weight of each task. + +# 4.2 Co-Attention Fusion Module + +Given the medical dialogue, we employ our pre-trained speaker-aware encoder $\mathrm{Encoder}^{(\mathrm{SA})}$ as our utterance encoder by extracting the final hidden state of the [CLS] token as the representation, where [CLS] is the special classification embedding in our pre-trained model. To capture the correlation between the utterance and the candidate item, given $m$ utterances $(U_{1}^{r_{1}}, U_{2}^{r_{2}}, \dots, U_{m}^{r_{m}})$ in a dialogue and a candidate item $I$ , we feed each utterance-candidate item pair $(U_{i}^{r_{i}}, I)$ into our speaker-aware encoder to obtain the utterance representation $\mathbf{U}_{i}$ . We also feed the candidate item $I$ into the speaker-aware encoder alone to obtain the candidate item representation $\mathbf{I}$ : + +$$ +\mathbf{U}_i = \mathrm{Encoder}^{(\mathrm{SA})}(U_i^{r_i}, I), \qquad \mathbf{I} = \mathrm{Encoder}^{(\mathrm{SA})}(I), \tag{8} +$$ + +To better capture complex interactions between utterances, we use a co-attention fusion mechanism to aggregate the utterance information. We treat each utterance as a node and define the other utterances in the same sliding window as its neighbors.
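The graph construction in the last sentence can be sketched as follows (a hypothetical helper, assuming the MIE-style sliding window in which every utterance is a neighbor of every other utterance in the same window):

```python
def in_window_neighbors(num_utterances):
    """Neighbor sets N_i for one dialogue window: every other utterance in
    the window is a neighbor of node i. Edge weights are not fixed here;
    they come from the learned co-attention scores."""
    return {i: [j for j in range(num_utterances) if j != i]
            for i in range(num_utterances)}
```

For the 5-utterance windows used by the MIE dataset, each node therefore has four neighbors.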
Then we calculate the attention coefficient between a node $i$ and its neighbor $j$ ($j \in \mathcal{N}_i$): + +$$ +c_{ij} = \mathbf{W}_1^{u \to u} \left( \mathrm{ReLU} \left( \mathbf{W}_0^{u \to u} \left( \mathrm{concat}(\mathbf{U}_i, \mathbf{U}_j) \right) \right) \right), \tag{9} +$$ + +where $\mathcal{N}_i$ is the set of in-window neighbors of node $i$ , $\mathbf{W}_1^{u \to u} \in \mathbb{R}^{1 \times w}$ and $\mathbf{W}_0^{u \to u} \in \mathbb{R}^{w \times 2d}$ are weight matrices, and $\mathrm{concat}(\cdot, \cdot)$ is the concatenation operation. $d$ is the number of dimensions of the utterance feature representation, and $w$ is the number of dimensions of the intermediate hidden state. + +We use a softmax function to normalize the utterance-utterance co-attention coefficients $\phi$ : + +$$ +\phi_{ij} = \mathrm{softmax}(c_{ij}) = \frac{\exp(c_{ij})}{\sum_{k \in \mathcal{N}_i} \exp(c_{ik})}. \tag{10} +$$ + +Then, given the utterance-utterance co-attention matrix $\phi$ , inspired by (Kipf and Welling, 2017; Veličković et al., 2018; Zhou et al., 2019), we employ a simple GCN layer for information fusion: + +$$ +\widetilde{\mathbf{U}}_i^{(l)} = \sigma \left( \sum_{j=1}^{n} \phi_{ij} \mathbf{W}_{\phi}^{(l)} \widetilde{\mathbf{U}}_j^{(l-1)} \right), \tag{11} +$$ + +where $\widetilde{\mathbf{U}}_i^{(0)}$ is initialized with $\mathbf{U}_i$ , $\mathbf{W}_{\phi}^{(l)} \in \mathbb{R}^{d \times d}$ , and $l$ is the number of propagation layers.
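Equations (9)-(11) can be sketched in NumPy as follows, a minimal illustration only: the random weights stand in for the learned matrices, and the rows of `U` stand in for the speaker-aware utterance encodings.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def coattention_matrix(U, W0, W1, neighbors):
    """Eqs. (9)-(10): score each neighbor pair (i, j) with a two-layer MLP
    on concat(U_i, U_j), then softmax over the neighbor set N_i."""
    m = U.shape[0]
    phi = np.zeros((m, m))
    for i in range(m):
        scores = np.array([W1 @ relu(W0 @ np.concatenate([U[i], U[j]]))
                           for j in neighbors[i]])
        e = np.exp(scores - scores.max())
        phi[i, neighbors[i]] = e / e.sum()
    return phi

def gcn_layer(U_prev, phi, W_phi):
    """Eq. (11): one propagation step; sigma is the logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-(phi @ U_prev @ W_phi.T)))

# toy window: m = 3 utterances, d = 4 feature dims, w = 6 hidden units
rng = np.random.default_rng(0)
m, d, w = 3, 4, 6
U = rng.normal(size=(m, d))
W0, W1 = rng.normal(size=(w, 2 * d)), rng.normal(size=w)
neighbors = {i: [j for j in range(m) if j != i] for i in range(m)}
phi = coattention_matrix(U, W0, W1, neighbors)
U_next = gcn_layer(U, phi, rng.normal(size=(d, d)))
```

Each row of `phi` sums to one over the in-window neighbors, and stacking `gcn_layer` $l$ times yields $\widetilde{\mathbf{U}}^{(l)}$.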
+ +We also explicitly involve the correlation between the utterance $\widetilde{\mathbf{U}}_i^{(l)}$ and the candidate item $\mathbf{I}$ by another co-attention layer: + +$$ +p_i = \mathbf{W}_1^{u \to c} \left( \mathrm{ReLU} \left( \mathbf{W}_0^{u \to c} \left( \mathrm{concat}(\widetilde{\mathbf{U}}_i^{(l)}, \mathbf{I}) \right) \right) \right), \tag{12} +$$ + +where $\mathbf{W}_1^{u \to c} \in \mathbb{R}^{1 \times w}$ and $\mathbf{W}_0^{u \to c} \in \mathbb{R}^{w \times 2d}$ are weight matrices. + +Similarly, we adopt a softmax function to normalize the utterance-candidate item co-attention coefficients $\psi$ : + +$$ +\psi_i = \mathrm{softmax}(p_i) = \frac{\exp(p_i)}{\sum_{k=1}^{N} \exp(p_k)}, \tag{13} +$$ + +Finally, the normalized co-attention coefficients are used to compute a linear combination of the utterance features for the final information extraction: + +$$ +\mathbf{T}_F = \mathrm{CoAttn}(D, I) = \sum_{k=1}^{N} \psi_k \widetilde{\mathbf{U}}_k^{(l)}. \tag{14} +$$ + +# 4.3 Inference Module + +The output representation $\mathbf{T}_F$ of the Co-Attention Fusion Module (CAF) is then fed into the final inference module to extract the medical information from the dialogue: + +$$ +\tilde{y}_c = \mathrm{softmax} \left( \mathbf{W}_o \mathbf{T}_F^{(c)} + \mathbf{b}_o \right), \tag{15} +$$ + +where $\mathbf{T}_F^{(c)}$ is the representation for the $c$ -th candidate item, $\mathbf{W}_o \in \mathbb{R}^{C \times d}$ and $\mathbf{b}_o \in \mathbb{R}^{C \times 1}$ are the weight matrix and bias, respectively, $\tilde{y}_c$ is the predicted probability of the candidate item's status, and $y_c$ is the ground-truth label. + +The final loss function is as follows: + +$$ +\mathcal{L} = -\frac{1}{NC} \sum_{i} \sum_{c} y_c^{(i)} \log \tilde{y}_c^{(i)}.
\tag{16}
$$

where $N$ is the number of dialogues in the training corpus and $C$ is the number of classes for the candidate-item status.

# 5 Experiments

# 5.1 Datasets

To verify the effectiveness of our SAFE framework, we conduct extensive experimental evaluations on the Medical Information Extraction (MIE) dataset$^{1}$ (Zhang et al., 2020). The dataset consists of doctor-patient dialogues collected from a Chinese medical consultation website$^{2}$, and it is representative of the medical dialogue extraction task for EMRs. On the one hand, because the dialogues are collected from real doctor-patient conversations, they reflect the data characteristics of EMRs. On the other hand, the problem of extracting and structuring EMRs posed by the MIE dataset has become a fundamental task in downstream medical applications, such as text-based dialogue systems or systems cascaded with ASR (Automatic Speech Recognition).

In the MIE dataset, the dialogues are already in text format. Because full dialogues tend to be too long, they are split into pieces using a sliding window, where a window consists of multiple consecutive turns of a dialogue. The sliding window size is set to 5, because windows of this size contain an appropriate amount of information; windows with fewer than 5 utterances are padded at the beginning with empty strings. The dataset then uses a window-to-information annotation method, annotating each candidate item and its status in each window of the dialogue. Annotators of the MIE dataset were guided by two physicians to ensure correctness, and the Cohen's kappa coefficient of the labeled data is 0.91. The dataset defines four categories (i.e., symptom, surgery, test, and other information) and 71 candidate items, which are frequent items in doctor-patient dialogues and are fixed in the MIE dataset. Each candidate item has five statuses (i.e.
patient-pos, patient-neg, doctor-pos, doctor-neg, patient-unknown). In total, the corpus has 1,120 dialogues and 18,212 windows. At the dialogue level, the dataset is split into training, validation, and test sets of 800, 160, and 160 dialogues, respectively; at the window level, the corresponding sizes are 12,932, 2,587, and 4,254. The detailed annotation statistics of the MIE dataset are shown in Table 1.

| | Train | Dev | Test |
| --- | --- | --- | --- |
| # Window-level | 12,932 | 2,587 | 4,254 |
| Avg. words of windows | 110.8 | 113.3 | 109.7 |
| Avg. annotations of windows | 2.5 | 2.7 | 2.4 |
| # Dialogue-level | 800 | 160 | 160 |
| Avg. words of dialogues | 404.4 | 434.7 | 401.3 |
| Avg. annotations of dialogues | 6.5 | 7.2 | 6.4 |

Table 1: The detailed annotation statistics of the MIE dataset.

# 5.2 Evaluation Metrics

For the MIE dataset, we evaluate the extracted medical dialogue information with Precision, Recall, and F1-Score. In accordance with the evaluation metrics described in Zhang et al. (2020), a correct result must predict both the candidate item and its status correctly. The results are evaluated at the window level and the dialogue level as follows:

- Window-level. The evaluation is computed for each segmented window, and we report the micro-average over all test windows.
- Dialogue-level. First, we merge the results of the windows belonging to the same conversation; for mutually exclusive statuses, the earlier status is overwritten by the latest one. Then, we evaluate the results of each dialogue and report the micro-average over all test dialogues.

# 5.3 Experiment Settings

# Task Training Settings

For the speaker recognition task, the label of the speaker in each utterance is derived from the prompt words at the beginning of the utterance (e.g., "patient:" or "doctor:"). In the training stage, we mask these prompt words to prevent label leakage. For the status entailment task, in addition to the original status labels (e.g., patient-pos), we add a None status as the negative label, because the candidate item is not provided at inference time and we therefore have to traverse the candidate-item space to make a prediction. For a given dialogue with $B$ candidate-item labels present, we randomly select $N_{s} \times B$ items that are not among the ground-truth candidate items and label them with the None status. In our experiments, we set $N_{s} = 2$. At inference time, we make predictions over the whole candidate-item space, and only candidate items with a non-None status are kept for the final evaluation.

# Hyperparameter Settings

For the speaker-aware dialogue encoder module, we use a BERT-base network structure to initialize the base dialogue encoder. The BERT-base (110M) model has 12 layers, the hidden-state dimensionality is 768, and the number of heads
is set to 12. We use the Adam optimizer (Kingma and Ba, 2015) with a batch size of 32 for 20 epochs. The learning rate $\alpha_{s}$ for SAE pre-training is set to 2e-5, the warmup proportion to 0.1, and the maximum sequence length to 512. The weight $\lambda$ that balances the tasks is set to 0.5, chosen by grid search. For the co-attention fusion module, the hidden dimensionality of the dense layer is set to 64 and the number of layers for utterance propagation to 2. The final inference module is trained to minimize the cross-entropy loss of the predicted labels using the Adam optimizer with a batch size of 128 for 15 epochs, with the initial learning rate $\alpha_{c}$ for the co-attention fusion method set to 1e-3. The models are trained on an NVIDIA Tesla V100 32GB GPU for about 4 hours.

# 5.4 Model Comparisons

In this section, we compare our proposed framework with several baselines to verify the effectiveness of our approach.

- LSTM-Classifier. The model uses only an LSTM encoder to obtain a representation of the concatenated utterances, followed by a self-attention layer and an MLP layer for prediction.
- MIE-Single (Zhang et al., 2020). The model uses an LSTM encoder and considers only the interactions within a single utterance.
- MIE-Multi (Zhang et al., 2020). The model uses an LSTM encoder and a simple aggregation module that takes the interactions between utterances into consideration.
- MIE-Multi (BERT). The architecture is the same as MIE-Multi, except that the original LSTM encoder is replaced with a BERT encoder.

| Method | Window P | Window R | Window F1 | Dialogue P | Dialogue R | Dialogue F1 |
| --- | --- | --- | --- | --- | --- | --- |
| LSTM-Classifier | 53.13 | 49.46 | 50.69 | 61.34 | 52.65 | 56.08 |
| MIE-Single (Zhang et al., 2020) | 69.40 | 64.47 | 65.18 | 75.37 | 63.17 | 67.27 |
| MIE-Multi (Zhang et al., 2020) | 70.24 | 64.96 | 66.40 | 76.83 | 64.07 | 69.28 |
| MIE-Multi (BERT) | 71.45 | 71.17 | 71.31 | 71.01 | 74.46 | 72.69 |
| SAFE (Ours) | 72.59* | 73.86* | 73.22* | 73.20 | 78.71* | 75.86* |

Table 2: Performance comparisons with different baseline models on the MIE dataset; P, R, and F1 denote Precision, Recall, and F1-Score at the window and dialogue levels. Significance over the best baseline is marked with * (paired t-test, $p < 0.01$).
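The dialogue-level merging and micro-averaged scoring described in Section 5.2 can be sketched as follows. The record format and all names are hypothetical, not taken from the MIE codebase: each record is `(dialogue_id, window_index, {(item, status), ...})`, and for an item annotated in several windows the latest window's status overwrites earlier ones.

```python
from collections import defaultdict

def dialogue_level_scores(window_preds, window_golds):
    """Merge window-level (item, status) results per dialogue, then compute
    micro-averaged Precision/Recall/F1. A result counts as correct only if
    both the candidate item and its status match."""
    def merge(records):
        by_dialogue = defaultdict(dict)  # dialogue_id -> {item: status}
        for dlg, win, pairs in sorted(records, key=lambda r: (r[0], r[1])):
            for item, status in pairs:
                by_dialogue[dlg][item] = status  # latest window wins
        return by_dialogue

    pred, gold = merge(window_preds), merge(window_golds)
    tp = fp = fn = 0
    for dlg in set(pred) | set(gold):
        p = set(pred.get(dlg, {}).items())
        g = set(gold.get(dlg, {}).items())
        tp += len(p & g)
        fp += len(p - g)
        fn += len(g - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Window-level scoring is the same computation without the merging step, micro-averaged over windows instead of dialogues.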
| Method | Precision | Recall | F1-Score |
| --- | --- | --- | --- |
| SAFE | 73.20 | 78.71 | 75.86 |
| w/o (SAE) | 68.71 | 78.51 | 73.29 |
| w/o (CAF) | 69.46 | 74.55 | 71.91 |

Table 3: The ablation study on the MIE dataset with dialogue-level metrics.
| CAF Layers | Precision | Recall | F1-Score |
| --- | --- | --- | --- |
| 1 | 71.91 | 79.10 | 75.34 |
| 2 | 73.20 | 78.71 | 75.86 |
| 3 | 71.50 | 76.53 | 73.93 |

Table 4: Performance with different numbers of co-attention layers in the CAF module on the MIE dataset, with dialogue-level metrics.

- SAFE (Ours). Our speaker-aware co-attention framework takes the speaker's identity and the correlations between utterances and candidate items into consideration.

# 5.5 Main Results

In accordance with the evaluation metrics introduced by Zhang et al. (2020), we report both window-level and dialogue-level results. Table 2 shows the performance of the different methods on the MIE dataset. Under the dialogue-level metrics, the LSTM-Classifier performs the worst, with a precision of only 61.34 and a recall of 52.65, because it fails to consider the interactions between utterances. The MIE-Multi outperforms the MIE-Single because, unlike the latter, it takes turn interactions into account, achieving a precision of 76.83 and a recall of 64.07 under the dialogue-level metrics. The MIE-Multi is a state-of-the-art framework for medical dialogue extraction. However, because it does not take the speaker's identity into consideration, the MIE-Multi cannot capture the complex interactions between utterances and candidate items, and it performs less effectively than our SAFE framework.

![](images/6111423ccd54471ab0b984b7e050befcaea395b16fb760abff3adda0bcf1efa8.jpg)
Figure 4: A case study on a patient-doctor dialogue in the test set. The corresponding utterance-utterance co-attention matrix is shown in Figure 5.

For a fairer comparison that eliminates the performance boost brought by pre-trained language models such as BERT, we re-implement the MIE-Multi with a BERT-based structure. The resulting MIE-Multi (BERT) obtains an F1-Score of 71.31 under the window-level metrics and 72.69 under the dialogue-level metrics, which is better than the original MIE-Multi but still worse than our method.
Our SAFE framework achieves a state-of-the-art F1-Score of 75.86, demonstrating the superiority of our method by a large margin.

# 5.6 Ablation Study

We conduct ablation studies on the MIE dataset to analyze the contribution of each component of our proposed SAFE model. The main results are shown in Table 3.

# Effectiveness of Speaker-Aware Encoder

First, we evaluate the effect of the speaker-aware encoder module. Removing the SAE module causes the overall F1-score to decline from 75.86 to 73.29 under the dialogue-level metrics, which suggests that taking the speaker's information into account helps improve dialogue extraction performance. Additionally, to quantitatively demonstrate that the SAE module identifies the speaker better, we calculate the speaker misidentification error rate on the test set, i.e., the fraction of errors caused by a wrong speaker identity (e.g., pred: doctor-pos, label: patient-pos). The speaker misidentification error rate decreases from $5.0\%$ to $4.1\%$ compared to the method without the SAE module.

![](images/a0003886835dd7ea1f07fef67f9f817a615a8b9a06e2929ccbef50e0359a3e9c.jpg)
Figure 5: The co-attention matrix for the case in Figure 4. The attention map indicates the utterance-utterance interactions between different speakers.

# Effectiveness of Co-Attention Fusion Module

Second, we evaluate the effect of the co-attention fusion module. Removing the CAF module reduces the overall F1-score by $5.49\%$ (from 75.86 to 71.91) under the dialogue-level metrics, which shows that adopting the co-attention graph network to capture the complex interactions between utterances is essential for medical dialogue extraction. We also analyze the effect of the number of co-attention layers on extraction performance. The results are shown in Table 4.
Note that with a single co-attention layer, the CAF is equivalent to flat attention over the utterances. The table shows that the model with two co-attention layers achieves the best result, which indicates that proper propagation between utterances helps the model perceive the complex interactions in a medical dialogue.

# 5.7 Case Study

The previous sections provided a quantitative analysis of the experimental results. In this section, to show that our SAFE framework better captures utterance interactions in the dialogue, we provide a case study from the test set.

Figure 4 shows a case study on a patient-doctor dialogue in the test set. To illustrate how our co-attention fusion module captures the interactions between the speakers and the correlation with the candidate item, we visualize the utterance-utterance interactions with an attention map. From Figure 5, we can see that the third column (Doctor: Do you have a fever?) and the fourth column (Patient: No, everything is OK.) of the matrix have markedly higher values, because these two utterances are the most important for extracting the annotated label (fever: patient-negative). The co-attention coefficients between these two utterances (i.e., $\phi_{2,3}$ and $\phi_{3,2}$) are also very high, because their interaction must be considered to infer the ground-truth status patient-negative.

# 5.8 Discussion

Here we briefly discuss how the proposed system connects with clinical practice. For text-based systems, the structured information extracted from text-based dialogues can be used to form a medical knowledge graph, which would benefit primary doctors.
The structured information from medical dialogues can also benefit many clinical applications, such as automatic diagnosis systems (Liu et al., 2018; Xu et al., 2019; Xia et al., 2020) and clinical decision support systems that assist doctors. For ASR systems, it is likewise important to exploit speaker identity recognition to facilitate medical information extraction after speech recognition.

# 6 Conclusion

In this paper, we propose a speaker-aware co-attention framework for medical dialogue information extraction. We design a speaker-aware dialogue encoder module, which takes the speaker's identity into account and better exploits the pre-trained language model to capture the semantics of the utterances and the candidate item. Moreover, we propose a co-attention fusion network to aggregate the utterance information, which models the complex interactions between different utterances as well as the correlations between utterances and candidate items. The experimental results demonstrate the effectiveness of the proposed framework.

# 7 Limitations

While perceiving the speaker's identity and complex utterance interactions is essential for medical dialogue information extraction, a limitation of our work is that we do not explicitly incorporate prior medical knowledge, such as an existing medical knowledge graph (MKG), to further improve the overall performance with fewer annotated labels. To address this limitation, in future work we plan to leverage the medical entity relations in a medical knowledge graph and to introduce a medical-knowledge-enhanced pre-trained language model into our framework to further improve medical dialogue information extraction.

# 8 Ethical Considerations

It should be mentioned that the doctor-patient dialogues in the MIE dataset are collected from the openly accessible online health forum Chunyu-Doctor, whose owners make such information visible to the public.
All the patients' information has been anonymized. Apart from the personal information de-identified officially by the Chunyu-Doctor forum, we manually reviewed the collected data to prevent privacy leaks, and we ensure there is no identifiable or offensive information in the experimental dataset.

The model and framework proposed in this paper are for research purposes only and are intended to facilitate studies of using NLP methods to better extract structured information from medical dialogues, which can alleviate doctors' burden of recording EMRs and accelerate the development of medical digitization.

# Acknowledgement

Our work is supported by the National Key Research and Development Program of China No. 2020AAA0109400.

# References

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pages 4171-4186.

Nan Du, Kai Chen, Anjuli Kannan, Linh Tran, Yuhui Chen, and Izhak Shafran. 2019. Extracting symptoms and their status from clinical conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 915-925, Florence, Italy. Association for Computational Linguistics.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.

Xinzhu Lin, Xiahui He, Qin Chen, Huaixiao Tou, Zhongyu Wei, and Ting Chen. 2019. Enhancing dialogue symptom diagnosis with global attention and symptom graph. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5033-5042, Hong Kong, China. Association for Computational Linguistics.

Qianlong Liu, Zhongyu Wei, Baolin Peng, Huaixiao Tou, Ting Chen, Xuanjing Huang, Kam-Fai Wong, and Xiangying Dai. 2018. Task-oriented dialogue system for automatic diagnosis. In ACL, volume 2, pages 201-207.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Shuang Peng, Mengdi Zhou, Minghui Yang, Haitao Mi, Shaosheng Cao, Zujie Wen, Teng Xu, Hongbin Wang, and Lei Liu. 2021. A dialogue-based information extraction system for medical insurance assessment. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 654-663, Online. Association for Computational Linguistics.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2020. ERNIE 2.0: A continual pre-training framework for language understanding. In AAAI, pages 8968-8975.

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations.

Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.

Yuan Xia, Chunyu Wang, Zhenhui Shi, Jingbo Zhou, Chao Lu, Haifeng Huang, and Hui Xiong. 2021. Medical entity relation verification with large-scale machine reading comprehension. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 3765-3774.

Yuan Xia, Jingbo Zhou, Zhenhui Shi, Chao Lu, and Haifeng Huang. 2020. Generative adversarial regularized mutual information policy gradient framework for automatic diagnosis. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01):1062-1069.

Lin Xu, Qixian Zhou, Ke Gong, Xiaodan Liang, Jianheng Tang, and Liang Lin. 2019. End-to-end knowledge-routed relational dialogue system for automatic diagnosis. In AAAI.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS, pages 5754-5764.
+Yuanzhe Zhang, Zhongtao Jiang, Tao Zhang, Shiwan Liu, Jiarun Cao, Kang Liu, Shengping Liu, and Jun Zhao. 2020. MIE: A medical information extractor towards medical dialogues. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6460-6469, Online. Association for Computational Linguistics. +Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: Graph-based evidence aggregating and reasoning for fact verification. In ACL, pages 892-901. \ No newline at end of file diff --git a/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/images.zip b/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..12a8d859bf9b957dc27b0bb9e62bc8d0b428e9a8 --- /dev/null +++ b/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5bfdec8199f52a3e57d01e4f1c21fdfb95caaab5b20be25820d649153234a44f +size 359417 diff --git a/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/layout.json b/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..fa1ee72c76fbb8e9f17c2a48426f75bb3294c992 --- /dev/null +++ b/aspeakerawarecoattentionframeworkformedicaldialogueinformationextraction/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a53a92737dac38cd996a5aaf0e99850ef44cdcdf3758f271009bd4e791037afa +size 387442 diff --git a/asqafactoidquestionsmeetlongformanswers/e3fcb04b-b484-455a-aaaa-bf7365841c71_content_list.json b/asqafactoidquestionsmeetlongformanswers/e3fcb04b-b484-455a-aaaa-bf7365841c71_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..49cfbe3e70d6b2a1cf2d63ad4476fa7ea972cd0a --- /dev/null +++ 
b/asqafactoidquestionsmeetlongformanswers/e3fcb04b-b484-455a-aaaa-bf7365841c71_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e77ed319d130462c93945993ddf474365e33ccd8f7210a52effefc2b655435c +size 104726 diff --git a/asqafactoidquestionsmeetlongformanswers/e3fcb04b-b484-455a-aaaa-bf7365841c71_model.json b/asqafactoidquestionsmeetlongformanswers/e3fcb04b-b484-455a-aaaa-bf7365841c71_model.json new file mode 100644 index 0000000000000000000000000000000000000000..54aa6bfb1738a972879bb6d014d5bb5cc6e067f7 --- /dev/null +++ b/asqafactoidquestionsmeetlongformanswers/e3fcb04b-b484-455a-aaaa-bf7365841c71_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dce7047c4ae018b4a7988682621f0281ee4bac763126b383f598b60c9982a1d0 +size 125989 diff --git a/asqafactoidquestionsmeetlongformanswers/e3fcb04b-b484-455a-aaaa-bf7365841c71_origin.pdf b/asqafactoidquestionsmeetlongformanswers/e3fcb04b-b484-455a-aaaa-bf7365841c71_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..bda7f8ffe99a9006bf0e3fd56958dee769a870d6 --- /dev/null +++ b/asqafactoidquestionsmeetlongformanswers/e3fcb04b-b484-455a-aaaa-bf7365841c71_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cf49a55b10b8aab2671fb4d5756766d731ab30285bbe7ab2e684602740a3f7a +size 987545 diff --git a/asqafactoidquestionsmeetlongformanswers/full.md b/asqafactoidquestionsmeetlongformanswers/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f08a7dd36d20f3c5b895c81c1eb3cd2dbd26f73b --- /dev/null +++ b/asqafactoidquestionsmeetlongformanswers/full.md @@ -0,0 +1,418 @@ +# ASQA: Factoid Questions Meet Long-Form Answers + +Ivan Stelmakh $^{1*}$ Yi Luan $^{3}$ + +Bhuwan Dhingra2,3 Ming-Wei Chang3 + +$^{1}$ Yakov & Partners $^{2}$ Duke University $^{3}$ Google Research + +stelmakh95@icloud.com + +{luanyi,bdhingra,mingweichang}@google.com + +# Abstract + +An abundance of datasets and 
availability of reliable evaluation metrics have resulted in strong progress in factoid question answering (QA). This progress, however, does not easily transfer to the task of long-form QA, where the goal is to answer questions that require in-depth explanations. The hurdles include (i) a lack of high-quality data, and (ii) the absence of a well-defined notion of the answer's quality. In this work, we address these problems by (i) releasing a novel dataset and a task that we call ASQA (Answer Summaries for Questions which are Ambiguous); and (ii) proposing a reliable metric for measuring performance on ASQA. Our task focuses on factoid questions that are ambiguous, that is, have different correct answers depending on interpretation. Answers to ambiguous questions should synthesize factual information from multiple sources into a long-form summary that resolves the ambiguity. In contrast to existing long-form QA tasks (such as ELI5), ASQA admits a clear notion of correctness: a user faced with a good summary should be able to answer different interpretations of the original ambiguous question. We use this notion of correctness to define an automated metric of performance for ASQA. Our analysis demonstrates an agreement between this metric and human judgments, and reveals a considerable gap between human performance and strong baselines. + +# 1 Introduction + +In the last few years, the factoid question answering (QA) task—extracting short answers to factoid questions—has witnessed significant progress (Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020; Lewis et al., 2020; Izacard and Grave, 2021). The progress was achieved in large part thanks to (i) the availability of high-quality datasets (Voorhees and Tice, 2000; Joshi et al., 2017; Yang et al., 2018; Abujabal et al., 2019; Kwiatkowski et al., 2019), + +and (ii) a well-defined notion of correctness. 
A key challenge for ongoing research now lies in long-form question answering where the goal is to generate detailed explanations in response to questions that require elaborate and in-depth answers. + +There is much less data available for the task of long-form QA. One of the primary data sources is the ELI5 dataset (Fan et al., 2019) that pairs open-ended questions with paragraph-long answers written by users of the "Explain Like I'm Five" Reddit forum. However, questions in ELI5 are very general (e.g., "How can different animals perceive different colors?") and can be answered in myriad different ways, making it hard to define objective criteria for a good answer. As a result, Krishna et al. (2021) identify several hurdles in using this data towards meaningful modeling progress, including a lack of reliable evaluation metrics. + +In this work, we address the lack of data sources and unreliability of evaluations by constructing a long-form QA dataset for factoid questions. Our paper is motivated by the work of Min et al. (2020) who observe that more than half of the factoid questions that occur naturally are ambiguous. For example, a seemingly simple question: "Who was the ruler of France in 1830?" is ambiguous because there were two rulers of France in 1830. Min et al. (2020) collected the AMBIGQA dataset that connects ambiguous factoid questions with disambiguations: pairs of disambiguated questions and unique short answers to these questions (see example on the right side of Figure 1). + +We note, however, that ambiguous questions often arise when a user lacks background knowledge about why there might be multiple answers to their question, and how those answers relate to each other. Thus, the list of disambiguations may not be satisfactory for the user. 
For example, the fact that in 1830 the ruler of France changed due to the revolution is highly salient but is not captured in + +![](images/95727f5e7299e9838befb823b14f7588a1c47f6573623972cf3cb7f895e18337.jpg) +Figure 1: The input questions in ASQA are sourced from AMBIGQA. Long-form answers must be sufficient to answer disambiguated questions from AMBIGQA (short answers are marked in blue and green), and should introduce additional knowledge from Wikipedia (highlighted in red) to resolve ambiguity and clarify the relationship between different short answers. The DR score we propose combines ROUGE and Disambiguation-accuracy (Disambig-Acc) metrics, overcoming the issues with long-form QA evaluation outlined by Krishna et al. (2021). + +the AMBIGQA disambiguations. + +In this paper, we argue the importance of generating long-form answers to ambiguous factoid questions. In that, we present ASQA (Answer Summaries for Questions which are Ambiguous)—a novel dataset that pairs each ambiguous question from AMBIGQA with a crowdsourced long-form answer. The answers we collect aim to (i) explain the source of ambiguity in the question, and (ii) connect all the valid short answers into a coherent passage. An example ASQA instance is shown in Figure 1. + +The main feature of ASQA is a combination of (i) a well-defined notion of correctness pertinent to factoid QA and (ii) the complexity of long-form QA. First, observe that a good answer to an ambiguous question should be sufficient for the user to answer different interpretations of the question. This observation induces a notion of correctness that is conceptually similar to the conventional accuracy in factoid QA. Second, to answer an ambiguous question, a system needs to retrieve a diverse set of documents that talk about different interpretations of the question and synthesize this information into a coherent summary. 
Thus, the key challenges of long-form QA—precise retrieval and high-quality summarization—are present in ASQA. + +Contributions Overall, our work makes several contributions: + +- First, we carefully develop a crowdsourcing + +pipeline and collect ASQA—a dataset of high-quality long-form answers to 6,316 ambiguous factoid questions. + +- Second, we design principled evaluation procedures for ASQA: (i) we propose a novel automated evaluation metric (DR) that combines the correctness aspect of factoid QA and the fluency aspect of long-form QA; (ii) we develop and release a convenient interface for human evaluations; (iii) we conduct a small-scale human study that shows a high agreement between our automated metric DR and human judgments. +- Third, we establish strong baselines for our task by combining joint passage retrieval (Min et al., 2021) and T5-large (Raffel et al., 2019). Our extensive evaluations demonstrate that there is a large gap between the baselines and human performance. Additionally, we highlight areas of improvement for future research on ASQA. + +# 2 Related Work + +In this section, we describe relevant works that propose new tasks, datasets, and methods for QA and summarization problems. + +Extractive QA Much of the existing work on question answering, including reading comprehension (Rajpurkar et al., 2016, 2018; Trischler et al., 2017; Yang et al., 2018), open-domain QA (Kwiatkowski et al., 2019; Joshi et al., 2017) and dialog-based QA (Choi et al., 2018), assumes that questions have unique answers. Min et al. (2020) relax this assumption and propose a task that aims at identifying all possible short answers to the + +ambiguous subset of the open-domain version of the NQ dataset, denoted NQ-OPEN (Kwiatkowski et al., 2019; Lee et al., 2019). The AMBIGQA dataset constructed by Min et al. (2020) serves as a building block of the present work and we provide more details on this dataset in Section 3. 
Another related effort is the CONDITIONALQA task (Sun et al., 2021) that requires systems to identify conditions under which the extracted answers are valid. Unlike the ASQA task, the answers in CONDITIONALQA come from a document provided in advance and do not need to be summarized into a single response. + +Generative QA Extractive models achieve good results when the answer to the question is readily available on the web. However, in many settings, including ambiguous factoid questions, a system needs to combine information from many (unknown) sources to present the answer to the user in a convenient way. Hence, in this work, we focus on the generative QA setting where a model needs to generate a textual answer rather than extract it. + +Datasets for generative QA include NARRATIVEQA (Kocisky et al., 2018) and CoQA (Reddy et al., 2019), but the average answer length in these datasets is small: 4.7 and 2.7 tokens, respectively. The MS MARCO Natural Language Generation (MS-NLG) dataset by Nguyen et al. (2016) combines both extractive and generative tasks and contains slightly longer human-generated answers (usually, a sentence-long) that can be read by a smart assistant. Fan et al. (2019) proposed a more challenging task of answering open-ended (e.g., "why?") questions. They scraped the "Explain Like I'm Five" Reddit forum and released a dataset of $\sim 272\mathrm{K}$ questions, where each question is supplied with several paragraph-long answers generated by the Reddit users. We overview the differences between ASQA, ELI5 and MS-NLG in Section 3.3. + +Recently, large language models such as GPT-3 (Brown et al., 2020) have been successfully applied to the task of long-form QA using the ELI5 dataset (Nakano et al., 2021). 
For this, a two-step human-in-the-loop approach was involved: first, demonstrations of annotators navigating the web to write answers were collected; second, a reward model (Stiennon et al., 2020) was trained by manual pairwise comparisons of answers. In ASQA, relevant passages for the answer are already provided by the annotators and we show that the proposed DR score correlates well with the human judgment of answer quality. Using this automated metric in place of the reward model in the approach of Nakano et al. (2021) is a potential direction for future work.

Summarization Given a set of documents relevant to the question (either ground truth or obtained using retrieval), the problem of generating a long-form answer reduces to query-based multi-document summarization. A small-scale dataset for this task was introduced as part of the DUC tasks (Dang, 2005). Recent work on building large-scale datasets has instead focused either on query-based summarization from a single document (Nema et al., 2017; Zhong et al., 2021) or on multi-document summarization without queries (Liu et al., 2018; Fabbri et al., 2019). In addition to the QA task, the ASQA dataset is suitable for the evaluation of systems' accuracy in the summarization setting, where the ground-truth passages containing the relevant information are assumed to be given.

QA-Based Evaluation Prior work has looked at using question answering techniques to evaluate factual consistency in summarization (Wang et al., 2020; Durmus et al., 2020) and dialogue (Honovich et al., 2021). These works automatically generate questions from the system-produced text and search for answers in some reference text (e.g., the input being summarized) to evaluate the quality of the output. Instead, to evaluate generated long-form answers to ambiguous questions, in ASQA we use questions created by AMBIGQA annotators.
# 3 ASQA Task and Data

In this section, we introduce the ASQA task and the underlying data-collection process. The ASQA task is illustrated in Figure 1. The goal of the task is to write a comprehensive paragraph-long answer $\hat{a}$ to a given ambiguous question $q$.

Source Data We build ASQA on top of the subset of ambiguous questions identified in the AMBIGQA dataset. Out of a total of 14,042 AMBIGQA questions, 7,207 are identified as ambiguous by at least one AMBIGQA annotator. Each of these ambiguous questions $q$ is paired with a list of $n$ disambiguations $\{(x_i, y_i)\}_{i=1}^n$, where $x_i$ denotes a disambiguated question and $y_i$ denotes a unique short answer to $x_{i}$. The number of disambiguations ranges from 2 to 46 per ambiguous question. To ensure that it is feasible to put all this information into a coherent story, we remove 417 questions with more than six disambiguations from consideration, thereby focusing on 6,790 AMBIGQA instances that we use as a starting point for building our task.

![](images/73ebb6cebd52f10cbc5720fb827ca6d40224ed23058f2f56aef901258c7c8994.jpg)

![](images/a133c9e0635723544ffd7d886f8875aa0d8c8bc6280e7c70a727d4397e1adbc0.jpg)
Figure 2: Schematic representation of the annotation interface.

# 3.1 ASQA Annotation Objectives

At a high level, the goal of the annotation process is to obtain high-quality long answers to ambiguous questions. We begin with a formulation of criteria for what counts as a good long answer to an ambiguous question:

- Completeness The long answer should contain all valid short answers $y_{1}, \ldots, y_{n}$ to the disambiguated questions $x_{1}, \ldots, x_{n}$ in an appropriate context.
- Comprehensiveness The long answer should provide enough details for the user to (i) understand the source of ambiguity in the original question and (ii) understand the relationship between different short answers.
- Fluency The long answer should be coherent and fluent.
+- Attributability The long answer should be grounded in an underlying source of information (in our case, Wikipedia). + +# 3.2 ASQA Annotation Process + +To ensure that annotations satisfy the aforementioned objectives, we develop a custom annotation interface (Figure 2) and recruit native English speakers to perform our task. We then collect long-form answers for each target instance of AMBIGQA using a commercial crowdsourcing platform where it is possible to interact with the annotators on an ongoing basis. Let us now discuss the key components of our annotation pipeline. + +Input to Annotators The left side of Figure 2 illustrates the input to our annotation procedure. Annotators are given relevant aspects of the target AMBIGQA instance: the ambiguous question $q$ , list of disambiguations $\{(x_i, y_i)\}_{i=1}^n$ , and the Wikipedia pages $W$ visited by AMBIGQA annotators. Additionally, to help annotators understand the context behind the disambiguations without reading full Wikipedia articles, for each disambiguation $i$ we provide a (possibly empty) Wikipedia passage $C_i$ with information relevant to the disambiguation. Details on the procedure used to find these context passages $\{C_i\}_{i=1}^n$ are given in Appendix A. + +Output of Annotation The key output of annotation is a long-form answer $a$ to a given ambiguous question $q$ . Additional elements of the output are introduced to facilitate the requirement of attributability. In that, we require annotators to provide the source Wikipedia passage $e$ for each piece of additional information they bring to their answer. Our interface has designated fields for additional knowledge (see Figure 2) and annotators can add + +
as many of these fields as they need to include any number $m$ of evidence passages $\{e_j\}_{j=1}^m$.

| SPLIT | # QUESTIONS | # ANNOTATIONS |
| --- | --- | --- |
| TRAIN | 4,353 | 1 |
| DEV | 948 | 2 |
| TEST | 1,015 | 2 |

Table 1: Summary statistics of the ASQA dataset.

Instructions, Training and Quality Control We carefully design instructions, a training procedure, and quality control tools to minimize the amount of noise in annotations. Details on these aspects of the annotation pipeline are provided in Appendix A.

# 3.3 ASQA Dataset

By following the procedure outlined above, we annotated train, dev, and test splits of the AMBIGQA dataset. Each question in the train split was annotated by a single annotator while the dev and test splits have two annotations per question.

For 474 questions, our annotators raised concerns regarding the validity of the AMBIGQA disambiguations. Not all of these concerns necessarily indicate errors in the AMBIGQA dataset as some of them could be due to misinterpretation on the annotators' side. Nevertheless, to maintain data fidelity, we exclude the corresponding instances from the resulting dataset. Table 1 displays the final breakdown of the ASQA dataset.

Table 2 compares ASQA to other open-domain QA datasets: ELI5, MS-NLG, AMBIGQA, and NQ-OPEN. We observe that ASQA requires long answers with an average length of 64.8 (vs. 103.0 for ELI5 and 14.6 for MS-NLG), and is the only dataset that admits evaluations in terms of both ROUGE, which is typically used for long-form QA, and accuracy, which is typically used for factoid QA. This makes ASQA an appealing dataset as it enables researchers to work on long-form QA while retaining the benefits of reliable objective evaluation typical in factoid QA.

Additional Comparison to ELI5 ELI5 is the closest existing long-form QA dataset. We now provide additional comparison of ASQA and ELI5.

Support Documents First, both ASQA and ELI5 supplement annotations with relevant information retrieved from Wikipedia (ASQA) or the whole Internet (ELI5).
For ELI5, support documents are retrieved automatically and independently of the annotation process. The resulting documents contain, on average, 858 words. Manual analysis conducted by Fan et al. (2019) reveals that support documents are sufficient to answer $65\%$ of the questions and have information relevant to $92\%$ of the questions.

In ASQA, support documents are constructed as a part of the annotation process. For each annotation, the support document contains disambiguations from AMBIGQA, context paragraphs, and additional knowledge provided by the corresponding annotator (see Section 3.2 for details). On average, support documents contain 225 words, being much shorter than those for ELI5. By design of our annotation procedure, support documents should be sufficient to write long-form answers to ambiguous questions. Indeed, we observe that $92\%$ of the annotations' tokens are present in the corresponding support documents. If we exclude AMBIGQA disambiguations from the support documents, their average length reduces to 172 words, but $78\%$ of tokens from the answers remain captured therein. These observations demonstrate that ASQA satisfies the requirement of attributability (Section 3.1).

Inter-Annotator Agreement Second, we compare the inter-annotator agreement in ELI5 and ASQA, which we measure as the mean ROUGE-L F1 score between each pair of annotations for the same question. Our analysis reveals that ASQA has a much higher level of inter-annotator agreement: 49.6 vs. 16.9 for ELI5. Thus, ASQA admits a more well-defined notion of ground truth than ELI5.

Note that answers in ELI5 are written by Reddit users. Thus, they are inherently subjective and are not supposed to follow any predefined criteria. The diversity and subjectiveness could make human evaluation of the ELI5 answers challenging.
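The agreement computation described above (mean pairwise ROUGE-L F1 between annotations of the same question) can be sketched as follows. This is an illustrative re-implementation with naive whitespace tokenization, not the evaluation script used in the paper:

```python
from itertools import combinations

def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l_f1(hyp, ref):
    """ROUGE-L F1 between two strings, with whitespace tokenization."""
    h, r = hyp.split(), ref.split()
    lcs = lcs_len(h, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(h), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

def inter_annotator_agreement(annotations_per_question):
    """Mean pairwise ROUGE-L F1 over all pairs of annotations
    written for the same question, averaged over the dataset."""
    scores = [rouge_l_f1(a, b)
              for anns in annotations_per_question
              for a, b in combinations(anns, 2)]
    return sum(scores) / len(scores)
```

With two annotations per question (as in the ASQA dev and test splits), each question contributes exactly one pair to the average.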
In contrast, ASQA annotators follow common annotation guidelines and undergo a thorough training procedure, thereby aiming at generating answers that satisfy a set of well-defined criteria for human evaluation (Section 3.1). + +Overall, compared to other datasets, ASQA has some novel features that may be useful for future QA research. Its benefits, however, come at the cost of a much smaller sample size than that of MS-NLG and ELI5. Thus, we believe MS-NLG and ELI5 may be useful counterparts for ASQA + +
as they can be used for pre-training (that said, we leave this exploration to future work).

| QA TASK | DATASET | #QAS | #A PER Q | #WORDS IN A | ROUGE | DISAMBIG-ACC |
| --- | --- | --- | --- | --- | --- | --- |
| SHORT ANSWER | NQ-OPEN | 91K | 1.8 | 2.2 | ✗ | ✓† |
| SHORT ANSWER | AMBIGQA | 14,042 | 2.8 | 2.4 | ✗ | ✓ |
| LONG FORM | ELI5 | 272K | 12.0 | 103.0 | ✓ | ✗ |
| LONG FORM | MS-NLG | 183K | 1.7 | 14.6 | ✓ | ✗ |
| LONG FORM | ASQA | 6,316 | 2.0 | 64.8 | ✓ | ✓ |

Table 2: Comparison of ASQA with existing open domain QA datasets (#A PER Q and #WORDS IN A are dev set statistics). ASQA is the only QA dataset that allows for both ROUGE and accuracy evaluations. †Standard accuracy for non-ambiguous questions.

# 4 ASQA Metrics

In this section, we introduce metrics that we propose to evaluate performance on the ASQA task.

# 4.1 Automated Evaluation

We evaluate performance on the ASQA task along the following two aspects.

ROUGE Following the conventional approach for measuring the quality of generated text, we report the ROUGE-L score (Lin, 2004) in a multi-reference setup. Given that each example in the development and test sets is annotated by two annotators, we compare predictions against both answers and take the maximum of these two scores to be the score of the prediction.

Disambiguation Metrics A good long-form answer to an ambiguous question should contain short answers to all disambiguated questions as well as the context necessary to understand the source of ambiguity and the relationship between the short answers. However, ROUGE-L is not well suited for evaluating these aspects as it may fail to distinguish between two fluent and stylistically similar answers which provide considerably different information. Therefore, we complement ROUGE-L with two metrics that are specifically designed to capture the completeness and comprehensiveness aspects of our task:

- STR-EM (String Exact Match) The fraction of disambiguations for which the corresponding short answer is present in the long answer (exact match). The fraction is computed within each question and then averaged across all questions.

- Disambig-F1 We follow the reading comprehension literature (Rajpurkar et al., 2016, 2018) and use Roberta (Liu et al., 2019) trained on SQUADv2 to evaluate the fraction of disambiguated questions that can be answered from the predicted long answers.
For each disambiguation $(x_{i}^{(k)}, y_{i}^{(k)})$ in the $k$-th example, we apply the SQUADv2 model on the generated long-form answer $\hat{a}^{(k)}$ to predict short answer $\hat{y}_i^{(k)}$ to question $x_{i}^{(k)}$. Let $\phi$ denote a function that computes the token-level F1 score between the predicted short answer $\hat{y}_i^{(k)}$ and the ground truth short answer $y_{i}^{(k)}$ after normalizing answer strings in the manner done for SQUADv2 evaluations. Then the Disambig-F1 score is given by:

$$
\text{Disambig-F1} = \frac{1}{N} \sum_{k} \frac{1}{n^{(k)}} \sum_{i} \phi\big(\hat{y}_{i}^{(k)}, y_{i}^{(k)}\big),
$$

where $N$ indicates the total number of instances being evaluated, and $n^{(k)}$ indicates the number of disambiguations for the $k$-th instance.

Overall: DR Score Both ROUGE-L and disambiguation metrics are crucial for our task. Hence, we propose an overall DR (Disambiguation-Rouge) score that combines the two metrics as follows:

$$
\mathrm{DR} = \sqrt{\text{Disambig-F1} \times \text{ROUGE-L}}.
$$

We choose the geometric mean for aggregation to penalize methods that maximize one metric at a cost of the other. Note that STR-EM and Disambig-F1 aim at measuring the same aspect so we include only one of these metrics in the DR score.

# 4.2 Human Evaluation

We also design an interface for human evaluations for the ASQA task with the following metrics.

- Disambiguation Accuracy For each long-form answer, we ask human annotators to verify whether each disambiguated question from the AMBIGQA dataset can be correctly answered using the provided information. We then report the average number of disambiguations that are captured in the long-form answers (ACC).
- Pairwise Comparisons We propose a pairwise evaluation scheme where annotators need to compare two long-form answers to the same question.
We ask annotators to choose the better answer in terms of each of the three criteria: Comprehensiveness (COMP), Fluency (FLUE), and Human Overall impression (HO). In each pairwise comparison, an answer is given one point for a victory and half a point for a tie. We then normalize model scores into percentages by dividing the total number of points a model receives by the number of pairwise comparisons.

# 5 Experimental Setup

We now describe the baseline models and human answers used in our experiments.

# 5.1 Models

We include the following models for comparison.

Naïve The naïve model (denoted as QUESTION) repeats the ambiguous question eight times.

Retrieval-Only The retrieval-only models retrieve a Wikipedia passage as the answer:

- DPR@1. DPR (Karpukhin et al., 2020) is a BERT-based dual encoder trained on NQ.
- JPR@1. JPR (Min et al., 2021) trains a reranker on top of DPR for questions with multiple answers in AMBIGQA. The JPR model is the state-of-the-art retriever for AMBIGQA.

Generative We also evaluate T5-large based generative models (Raffel et al., 2019) in two regimes:

- T5 Closed Book (T5-C). We train T5 to answer ambiguous questions without providing any additional passages from Wikipedia. The model only relies on its pretrained knowledge to answer the question (Roberts et al., 2020).
- T5 Open Book (T5-O). The T5 model is additionally provided with context paragraphs retrieved by JPR. We vary the number of top-$K$ retrieved paragraphs used as input to T5, denoting the corresponding model as T5-O-K.

Oracle To investigate the headroom in retrieval systems, we experiment with an ORACLE system: T5-large provided with the gold supporting documents.
The input to ORACLE includes all the disambiguations $\{(x_i,y_i)\}_{i = 1}^n$ and contexts $\{C_i\}_{i = 1}^n$ shown to the annotators (left half of Figure 2), as well as the additional knowledge pieces $\{e_j\}_{j = 1}^m$ identified by one of the two annotators (the one with the longest answer). This system can be thought of as a generative model that has access to a perfect retriever. In evaluations, we compute ROUGE-L by comparing the answer predicted by ORACLE against the answer of the annotator whose additional knowledge pieces were not in the input of ORACLE (instead of the usual comparison against two references). + +Appendix B provides more details on the modeling aspects of our evaluations. + +# 5.2 Human Performance + +We also evaluate two sets of human answers: + +- Human performance with context (HP-w/-C). We use reference ASQA answers in our comparisons. Recall that the ASQA annotators were provided with context: disambiguations from AMBIGQA $\{(x_i, y_i)\}_{i=1}^n$ and context paragraphs we retrieved $\{C_i\}_{i=1}^n$ . We consider performance in this setup as an upper bound on the human performance. In evaluations of ROUGE-L, we compute the score of HP-w/-C by comparing the answers from two annotators against each other (instead of the usual comparison against two references). +- Human performance without context (HP-w/O-C). To establish a conservative lower bound on human performance, we additionally annotate 200 questions from the ASQA dev set (one annotation per question) in the "no context" regime. Annotators in this regime are only given ambiguous questions as input (no disambiguations or context paragraphs) and need to search for disambiguations and the required additional information on their own. + +# 6 Results + +We evaluate all models introduced above in the automated evaluations. Additionally, we conduct a small-scale human study involving a subset of models to provide some verification of the automated evaluation results. 
Specifically, our human study + +
involves four model outputs (JPR@1, T5-C, T5-O-1, T5-O-5) and two sets of human-generated answers (HP-w/o-C, HP-w/-C) that are juxtaposed on a subset of 45 randomly chosen questions from the development set of ASQA. For each of the questions, six target answers are split into three pairs and pairwise comparisons are conducted by authors of this paper in a blind manner.

|  | LEN (WRDS) | ROUGE-L | STR-EM | DISAMBIG-F1 | DR |
| --- | --- | --- | --- | --- | --- |
| QUESTION | 71.6 | 15.3 | 1.2 | 0.2 | 1.5 |
| DPR@1 | 99.9 | 31.1 | 30.1 | 16.7 | 22.8 |
| JPR@1 | 196.8 | 27.9 | 45.0 | 25.8 | 26.9 |
| T5 CLOSED BOOK (T5-C) | 62.5 | 31.0 | 10.3 | 7.4 | 15.1 |
| T5 OPEN BOOK 1 PASSAGE (T5-O-1) | 63.0 | 36.5 | 33.6 | 21.2 | 27.9 |
| T5 OPEN BOOK 3 PASSAGES (T5-O-3) | 71.1 | 38.8 | 39.9 | 25.1 | 31.2 |
| T5 OPEN BOOK 5 PASSAGES (T5-O-5) | 71.6 | 39.2 | 41.0 | 26.4 | 32.1 |
| T5 OPEN W/ ORACLE CONTEXT (ORACLE) | 82.6 | 46.6* | 88.7 | 59.2 | 52.5* |
| HUMAN W/O CONTEXT (HP-W/O-C) | 73.5 | 42.2 | 51.8 | 39.0 | 40.6 |
| HUMAN W/ CONTEXT (HP-W/-C) | 64.8 | 49.4* | 98.4 | 77.4 | 61.8* |

Table 3: Evaluation of baselines on the dev set of the ASQA task. T5 models with passages retrieved by JPR are the best models, but there is a large gap between human performance and model performance on all metrics. *As explained in Section 5, for ORACLE and HP-W/-C we only use one of the references to compute ROUGE-L.

Importance of Retrieval Models that take the output of a retrieval system (T5-O-1/3/5) perform much stronger than the closed-book model (T5-C) on both automated metrics and the human evaluation. T5-O-1 outperforms T5-C by 20.0 points on human evaluation (HO) and by 12.8 points on DR. T5-O-5 outperforms T5-C by 15.6 points on HO and by 17.0 points on DR.

Following Krishna et al. (2021), we also experimented with a random retrieval baseline where, during inference, the model was provided randomly selected retrieved passages from the training set. This baseline gets a DR of only 7.8, further confirming that, different from ELI5, retrieval is very important for ASQA.

Importance of Summarization Retrieval is very important for ASQA, but just using the top retrieved passage from a strong system (JPR@1) is not sufficient. Even though the STR-EM and Disambig-F1 metrics of JPR@1 are considerably higher than those of T5-O-1 (by 11.4 and 4.6 points, respectively), the human overall impression score HO and the DR score are similar across these models. This discrepancy is observed because the disambiguation metrics do not evaluate the conciseness of the answers, and the advantage of JPR@1 on these metrics is gained at the cost of the increased answer length (196.8 words). In contrast, T5 models tend to generate shorter answers whose length is much closer to the average length of human references (65 words). Hence, in addition to including the correct information, answers in ASQA must be concise, which highlights the importance of summarization.

|  | ACC | COMP | FLUE | HO |
| --- | --- | --- | --- | --- |
| JPR@1 | 36.1 | 44.4 | 42.2 | 37.8 |
| T5-C | 8.4 | 35.6 | 32.2 | 21.1 |
| T5-O-1 | 25.7 | 36.7 | 38.9 | 41.1 |
| T5-O-5 | 28.0 | 36.7 | 37.8 | 36.7 |
| HP-W/O-C | 52.7 | 60.0 | 66.7 | 74.4 |
| HP-W/-C | 94.3 | 86.7 | 82.2 | 88.9 |

Table 4: Results of human evaluations executed on a set of 45 questions from the development set of ASQA. The scores are in percentage and larger values are better. All metrics are specified in Section 4.2.

Correlation with Human Judgments Table 5 reports Pearson correlations between different automated metrics and the human judgments, enabling us to study the validity of the automated metrics.

First, we observe that Disambig-F1 is better correlated with human evaluations than ROUGE-L. That said, we note that ROUGE-L is an important metric as it enforces concise answers.

Second, observe that Disambig-F1 scores (Table 3) underestimate the human evaluations of ACC (Table 4). This discrepancy is likely due to: (i) a distribution shift between ASQA and SQUADv2; and (ii) the presence of distracting answers from the other disambiguated questions in the long answers, which are known to degrade QA models' accuracy (Jia and Liang, 2017). However, the almost perfect correlation between Disambig-F1 and ACC
(99.3) implies that this discrepancy does not impact the ordering of the different systems, thereby enabling us to meaningfully evaluate the relative differences in performance. Additionally, the presence of strong distractors ensures that the Disambig-F1 metric cannot be easily gamed by mentioning all the short answers without appropriate context.

Finally, we note that the DR score has the highest correlation with the overall human judgment HO among all automated metrics. While the difference with Disambig-F1 is not statistically significant, this observation hints at the importance of combining ROUGE-L and Disambig-F1 in the overall metric to take a holistic view on the model performance.

|  | ROUGE-L | DISAMBIG-F1 | DR |
| --- | --- | --- | --- |
| ACC | 81.1 | 99.3 | 97.9 |
| COMP | 79.3 | 96.4 | 93.7 |
| FLUE | 83.4 | 94.4 | 94.4 |
| HO | 86.4 | 92.9 | 95.0 |

Table 5: Correlation between human and automated metrics. DR has the highest correlation with the overall human score HO among all automated metrics.

Remaining Headroom Both the upper bound (61.8 DR and 88.9 HO) and the lower bound (40.6 DR and 74.4 HO) on human performance significantly exceed the best model performance (T5-O-5 with 32.1 DR and 36.7 HO). Hence, there is a lot of headroom for the community to explore in ASQA. We report some additional insights that may be helpful for future work in Section 7.

# 7 Analysis

We now conduct additional analysis that provides insights on the ASQA task.

Headroom in Summarization As shown in Figure 3, the Disambig-F1 score of retrieval-based methods increases considerably as the number of retrieved passages increases. However, there is a big gap between T5 and JPR, even though T5 takes the output passages from JPR as an input. This indicates that T5 tends to either lose information while summarizing the passages or produce outputs that are inconsistent with its input. Moreover, the Disambig-F1 of JPR@5 already exceeds the lower bound on human performance. Thus, progress in summarization alone may be sufficient to raise the overall level of performance on ASQA to this lower bound.
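For concreteness, the automated metrics from Section 4 and the Pearson correlation reported in Table 5 can be sketched as follows. This is an illustrative re-implementation, not the official evaluation code: `qa_predict` is a stand-in for the SQuADv2-trained Roberta reader, and string normalization is simplified to lowercasing and whitespace splitting rather than the full SQuADv2 normalization:

```python
import math

def normalize(s):
    # Simplified normalization (the official SQuADv2 script also strips
    # punctuation and articles before computing token F1).
    return s.lower().split()

def str_em(long_answer, short_answers):
    """STR-EM: fraction of disambiguations whose short answer occurs
    verbatim (case-insensitive) in the long answer."""
    hits = [y.lower() in long_answer.lower() for y in short_answers]
    return sum(hits) / len(hits)

def token_f1(pred, gold):
    """Token-level F1 -- the function phi in the Disambig-F1 definition."""
    p, g = normalize(pred), normalize(gold)
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    prec, rec = common / len(p), common / len(g)
    return 2 * prec * rec / (prec + rec)

def disambig_f1(examples, qa_predict):
    """examples: list of (long_answer, [(x_i, y_i), ...]) pairs.
    qa_predict(question, context) -> predicted short answer string;
    in the paper this is a SQuADv2-trained reader, pluggable here."""
    per_example = [
        sum(token_f1(qa_predict(x, a_hat), y) for x, y in ds) / len(ds)
        for a_hat, ds in examples
    ]
    return sum(per_example) / len(per_example)

def dr_score(disambig, rouge_l):
    """DR: geometric mean, penalizing methods that trade one metric
    off against the other."""
    return math.sqrt(disambig * rouge_l)

def pearson(xs, ys):
    """Pearson correlation, as used to relate automated metrics to
    human judgments in Table 5."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Because DR is a geometric mean, a system scoring zero on either component scores zero overall, which matches the stated goal of penalizing one-sided optimization.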
![](images/1c077f840670b9e51d7dc2f9a67bae4b954b75ba4d23d8d7b9357160601ebe5.jpg)
Figure 3: Disambig-F1 of different methods with a varying number of retrieved passages. Marker sizes are proportional to the answer lengths. The T5-O-K score increases with $K$ but there is also an increasing gap between T5-O-K and JPR@K. Passages from the latter are used as input for the former.

To provide further insight into the summarization aspect of our task, we conduct a manual analysis of the answers generated by the open-book T5-O-5 model. Our analysis identifies several characteristic mistakes (hallucination, question misunderstanding, and repetition) that need to be addressed to improve performance on the ASQA task. More details on this evaluation are provided in Appendix C.

Headroom in Retrieval Figure 3 compares models by Disambig-F1; a higher score means that the passage generated by a model provides answers to more disambiguated questions. We observe that the best-performing retrieval system, JPR@5, lags behind the output of the ORACLE model by 14.4 points and the human upper bound by 32.6 points. Hence, improving the retrieval step for ASQA is also important.

# 8 Conclusion

In contrast to existing datasets for long-form QA, ASQA admits a clear notion of correctness that we use to define an overall metric of performance (DR). Our empirical evaluations demonstrate that DR correlates well with the human judgment, and there is a large gap between human performance and the strong baselines. Thus, we believe that ASQA is an appealing task for the QA community. Our analysis suggests that strong performance on ASQA is contingent upon both high-quality retrieval and summarization. These aspects constitute important directions for future work on ASQA.

# 9 Limitations

We now make two remarks that we urge the reader to consider when interpreting the results of this work.
+ +Inter-Annotator Agreement In Section 3.3, we observed that inter-annotator agreement in ASQA is higher than in ELI5. We note, however, that the high inter-annotator agreement in ASQA is contingent upon the high inter-annotator agreement in the AMBIGQA dataset. Indeed, AMBIGQA disambiguations serve as a shared source of information between the two ASQA annotators working on the same instance, potentially inflating the level of agreement. + +That said, Min et al. (2020) observe that human annotators have a decent level of agreement in constructing the disambiguations in AMBIGQA, thereby supporting the observation that ASQA is more objective than ELI5. + +Evaluation Metrics Second, we caveat that our accuracy metrics (STR-EM and Disambig-F1) only measure the recall of the required information in the long answers. In cases where the long answer hallucinates incorrect disambiguations or facts, the accuracy metrics may still be high as long as the correct disambiguations are included. We note, however, that this unnecessary extra information may still be penalized by the ROUGE-L metric. Moreover, in the presence of distractors, we also expect the accuracy of the Roberta model used for reading comprehension to degrade, thereby effectively penalizing a low precision. + +On a separate note, the Disambig-F1 metric requires a high-accuracy QA system. Hence, for domains that are significantly different from Wikipedia, fine-tuning the Roberta SQUADv2 model on the task might be important to ensure the effectiveness of the Disambig-F1 metric. + +# Acknowledgements + +We thank Kristina Toutanova, Kenton Lee, Shashi Narayan for their valuable insights and feedback. We want to specially thank Sewon Min for discussions, as well as for sharing the implementation details on JPR. We also thank anonymous reviewers for providing detailed and insightful comments on our work. + +# References + +Abdalghani Abujabal, Rishiraj Saha Roy, Mohamed Yahya, and Gerhard Weikum. 2019. 
ComQA: A community-sourced dataset for complex factoid question answering with paraphrase clusters. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 307-317, Minneapolis, Minnesota. Association for Computational Linguistics.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.

Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2174-2184, Brussels, Belgium. Association for Computational Linguistics.

Hoa Trang Dang. 2005. Overview of DUC 2005. In Proceedings of the Document Understanding Conference, volume 2005, pages 1-12.

Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5055-5070, Online. Association for Computational Linguistics.

Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074-1084, Florence, Italy. Association for Computational Linguistics.

Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3558-3567, Florence, Italy. Association for Computational Linguistics.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval augmented language model pre-training. CoRR, abs/2002.08909.

Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, and Omri Abend. 2021. $q^2$: Evaluating factual consistency in knowledge-grounded dialogues via question generation and question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7856-7870, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874-880, Online. Association for Computational Linguistics.

Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics.

Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics.
+Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics. +Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328. +Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. 2021. Hurdles to progress in long-form question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4940-4957, Online. Association for Computational Linguistics. +Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466. +Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096, Florence, Italy. Association for Computational Linguistics. +Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. CoRR, abs/2005.11401. +Chin-Yew Lin. 2004.
ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics. +Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. arXiv preprint arXiv:1801.10198. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. +Sewon Min, Kenton Lee, Ming-Wei Chang, Kristina Toutanova, and Hannaneh Hajishirzi. 2021. Joint passage ranking for diverse multi-answer retrieval. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6997-7008, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: Answering ambiguous open-domain questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5783-5797, Online. Association for Computational Linguistics. +Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332. +Preksha Nema, Mitesh M. Khapra, Anirban Laha, and Balaraman Ravindran. 2017. Diversity driven attention model for query-based abstractive summarization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1063-1072, Vancouver, Canada. Association for Computational Linguistics. + +Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset.
CoRR, abs/1611.09268. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. CoRR, abs/1910.10683. +Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics. +Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266. +Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426, Online. Association for Computational Linguistics. +Claude Sammut and Geoffrey I. Webb, editors. 2010. TF-IDF, pages 986-987. Springer US, Boston, MA. +Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008-3021. +Haitian Sun, William W Cohen, and Ruslan Salakhutdinov. 2021. Conditionalqa: A complex reading comprehension dataset with conditional answers. arXiv preprint arXiv:2110.06884. +Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. 
NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191-200, Vancouver, Canada. Association for Computational Linguistics. +Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information + +Retrieval, SIGIR '00, page 200-207, New York, NY, USA. Association for Computing Machinery. +Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008-5020, Online. Association for Computational Linguistics. +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics. +Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics. +Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, and Dragomir Radev. 2021. QMSum: A new benchmark for query-based multi-domain meeting summarization. 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5905-5921, Online. Association for Computational Linguistics. + +# Appendix + +We now provide additional discussion of several aspects of this work. + +# A Additional Details on the Annotation Procedure + +We begin with an additional discussion of the annotation procedure. + +Construction of Context Paragraphs As discussed in Section 3, in our annotation task, we supplement each disambiguation $(x_{i},y_{i})$ from AMBIGQA with a context passage $C_i$. Let us now describe the procedure used to construct these context passages. + +For each disambiguation $(x_{i},y_{i})$, we execute the following three-stage procedure: + +1. Among all paragraphs from Wikipedia pages $W$ visited by AMBIGQA annotators, we select those that contain $y_{i}$. +2. We compute TF-IDF similarity (Sammut and Webb, 2010) between the selected paragraphs and $x_{i}$. +3. If the highest similarity exceeds a certain empirically selected threshold, we use the corresponding paragraph as an additional context $C_i$ provided to annotators. Otherwise, we do not provide context for that disambiguation $(C_i = \emptyset)$. The threshold was selected by manual analysis of a subset of question-context pairs. Our criterion was to avoid confusing (e.g., irrelevant) context paragraphs, and we qualitatively selected the threshold according to this criterion. + +Following this procedure, we were able to provide non-empty additional context passages for $45\%$ of all disambiguations used in our annotation procedure. + +Instructions and Training The instructions for our task are written along the lines of the four criteria we discussed in Section 3.1 and are provided in the supplementary materials. In addition to the detailed instructions, we carefully design the training procedure to minimize the amount of noise in the annotations.
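As an illustration, the three-stage context-selection procedure above can be sketched in plain Python. The whitespace tokenization, the smoothed IDF, and the 0.1 threshold are simplifying assumptions for this sketch; the paper's actual threshold was selected manually, as described above.

```python
import math
from collections import Counter

def tfidf_vectors(token_lists):
    """Simple smoothed TF-IDF vectors for a small list of tokenized texts."""
    n = len(token_lists)
    df = Counter(t for tokens in token_lists for t in set(tokens))
    vecs = []
    for tokens in token_lists:
        tf = Counter(tokens)
        vecs.append({t: (c / len(tokens)) * math.log(1 + n / df[t])
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def select_context(question, paragraphs, short_answer, threshold=0.1):
    """Stage 1: keep paragraphs containing the short answer y_i.
    Stage 2: score them by TF-IDF similarity to the question x_i.
    Stage 3: return the best paragraph, or None if below the threshold."""
    candidates = [p for p in paragraphs if short_answer in p]
    if not candidates:
        return None
    vecs = tfidf_vectors([question.split()] + [p.split() for p in candidates])
    scores = [cosine(vecs[0], v) for v in vecs[1:]]
    best = max(range(len(scores)), key=scores.__getitem__)
    return candidates[best] if scores[best] >= threshold else None
```

When no candidate paragraph passes the threshold (or none contains the short answer), the function returns `None`, mirroring the $C_i = \emptyset$ case.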
Specifically, before being accepted to the main task, annotators go through the following three-step training procedure: + +1. Self-study session First, we give annotators a short version of the instructions. They study them on their own and then annotate three sample questions. + +2. In-person session Following the self-study session, we have an online video session in which we walk annotators through the full version of the instructions and discuss mistakes made in the self-study annotations. +3. Exam session Finally, annotators complete a five-question exam. We manually evaluate all the exam answers and share personal feedback with annotators. + +In total, 27 annotators went through our training procedure and all of them were eventually accepted to work full-time on the main task. We note that the quality of answers in the self-study session was very diverse, with some annotators making critical mistakes (e.g., not covering some of the disambiguations). However, the in-person session proved effective in helping annotators understand the requirements, leading to exam answers of consistently high quality. + +Quality Control and Feedback Next, we discuss additional steps we took to help annotators in writing answers that satisfy the objectives formulated in Section 3.1. First, we added an automated check to our interface that warns annotators if any of the short answers $\{y_{i}\}_{i = 1}^{n}$ is missing from their long-form answer. Annotators were able to override the warning if they believed that an equivalent formulation of the missing short answer was already included. For example, given two disambiguations with short answers "four seasons" and "4 seasons", annotators were instructed to use either of these two equivalent options. + +Second, in addition to the carefully designed training procedure, we also continuously monitored the annotators' performance as they went through the task.
Specifically, we gave regular constructive feedback that highlighted areas of improvement and pointed out mistakes identified in annotators' past answers. While we did not observe any significant decay in quality between the exam session and the main task annotation, we believe that continuous monitoring is crucial to avoid creating an incentive for annotators to reduce the amount of effort they put into the task. + +Finally, to ensure that annotators did not have to guess when they met a situation not explained in the instructions, we maintained an FAQ document in which annotators could ask their questions and receive an answer within a day. To support this mechanism, we allowed annotators to "park" an annotation task they were unsure about and return to it after their concerns were resolved. + +Annotators' Well-Being For this study, we recruited annotators who were fully dedicated to our task (8 hours a day, 5 days a week). To reduce the pressure on annotators and allow them to work at a comfortable pace, we gave annotators one hour to answer each question and recommended answering ten or more questions per day. On average, it took annotators 15 minutes to answer each question, with the time consumption slightly decreasing as annotators got familiar with the task. The compensation rate for the task was set to $17.8/hour, which is higher than the minimum hourly wage in the US. + +# B Additional Details on Modeling + +In this section, we provide additional details on the modeling aspect of our evaluations. + +Input Format Figures 4 and 5 provide schematic representations of inputs to the T5-O-K and ORACLE models, respectively. Bold black text represents tags that separate conceptually different parts of the input; text in blue is replaced with instance-specific content in the actual training and evaluation data.
+ +The input to T5-O-K is simpler and consists of two parts separated by the context tag: an ambiguous question and $K$ retrieved passages. Each retrieved passage consists of the info field that contains the retrieved passage and the wikipage field that displays the title of the source Wikipedia page. Retrieved passages are separated with the pipe symbol "|".

$ambiguous_question context: info: $retrieved_passage_1 wikipage: $source_of_passage_1 | ... | info: $retrieved_passage_K wikipage: $source_of_passage_K

Figure 4: Input to the T5-O-K model.

The input to the ORACLE model is more complex and has five parts:

- An ambiguous question $q$
- Short answers $\{y_{i}\}_{i = 1}^{n}$ (answers)
- Disambiguated questions $\{x_{i}\}_{i = 1}^{n}$ (disambiguations)
- Context paragraphs $\{C_i\}_{i=1}^n$ (context1)
- Additional knowledge pieces provided by the annotator $\{e_j\}_{j = 1}^m$ (context2)

Similarly to the T5-O-K model, context paragraphs and additional knowledge pieces have info and wikipage fields, and the pipe symbol "|" is used to separate elements in the list.

$ambiguous_question answers: $short_answer_1 | ... | $short_answer_n disambiguations: $disambiguated_question_1 | ... | $disambiguated_question_n context1: info: $context_paragraph_1 wikipage: $source_of_context_1 | ... | info: $context_paragraph_n wikipage: $source_of_context_n context2: info: $additional_knowledge_1 wikipage: $source_of_knowledge_1 | ... | info: $additional_knowledge_m wikipage: $source_of_knowledge_m

Figure 5: Input to the ORACLE model.

Parameter Choice We use context lengths of 512, 1024, and 2048 for the T5-O-1, T5-O-3, and T5-O-5 models, respectively, with a batch size of 8 across the three models. For T5-C, we use a batch size of 16 with a context length of 128.

# C Qualitative Analysis

To provide further insight into the importance of the generation aspect of our task, we conduct a manual analysis of the answers generated by the T5 open-book model.
Our main observation is that even if the knowledge necessary to answer an ambiguous question is successfully retrieved, T5 often struggles to provide a high-quality answer. Table 6 demonstrates several characteristic mistakes that we identify. + +Hallucination The first two rows of Table 6 demonstrate examples of hallucination in the T5-generated answers. In the first example, T5 hallucinates facts about the 2016 elections (there were no elections in 2016) and about the winner of the 2017 elections (Rick Baker did not win the elections). In the second example, T5 starts with a wrong disambiguation (dragons do not marry people) and then mixes up facts about two characters from different books (Daenerys Targaryen and Elizabeth/Liz Pennykettle) into one. + +Question Misunderstanding Another issue we observe in the T5-generated answers is that sometimes the answers provide a coherent story that is relevant to the question but does not answer it. This problem is illustrated in the third row of Table 6 where the question "When was «under God» added to the Pledge of Allegiance?" is answered with a + +
Table 6: Characteristic mistakes in answers generated by the T5 open-book model. Each example lists the ambiguous question with its disambiguations and short answers, the T5 output, the JPR@5 relevant segments, and our error analysis.

Example 1
- Question: Who won the mayor race in st petersburg florida?
- Disambiguations: (Q1) Who won the 2017 mayor race in st petersburg florida? (answer: Kriseman / Rick Kriseman); (Q2) Who won the 2013 mayor race in st petersburg florida? (answer: Kriseman / Rick Kriseman); (Q3) Who won the 2009 mayor race in st petersburg florida? (answer: Foster / Bill Foster)
- T5 output: "the city of st. petersburg, florida has had several mayoral elections. democrat rick kriseman won the 2016 mayoral election, while republican former mayor rick baker did so in the 2017 mayoral election. kriseman defeated baker in the 2017 general election with 51.62% of the vote."
- JPR@5 relevant segments: 2017 St. Petersburg, Florida mayoral election: "...Incumbent mayor Rick Kriseman, a Democrat, opted to seek re-election. His opponent in the runoff was Republican former mayor Rick Baker... Incumbent Rick Kriseman won the re-election..." Rick Kriseman: "On November 7, 2017, Kriseman was re-elected for a second term, winning the general election with 51.62% of the vote. He defeated former Republican Mayor Rick Baker..."
- Error analysis: T5 hallucinates (i) the 2016 mayoral elections (there were no elections in 2016) and (ii) the winner of the 2017 mayoral elections (Kriseman won the elections, but the model claims Baker to be the winner).

Example 2
- Question: Who was the mother of dragons married to?
- Disambiguations: (Q1) Who was the mother of dragons first married to? (answer: Khal Drogo / Drogo); (Q2) Who was the mother of dragons married to second? (answer: Hizdahr zo Loraq); (Q3) What leader of Dothraki is the mother of dragons married to? (answer: Khal Drogo); (Q4) What Ghiscari noble is the mother of dragons married to? (answer: Hizdahr zo Loraq)
- T5 output: "dragons are often married to multiple people in a song of ice and fire storyline. in a song of ice and fire, the mother of dragons is known as elizabeth/liz pennykettle, a woman probably in her thirties who makes dragons out of clay and sells them at pottery fairs. she is the mother of lucy pennykettle, wife of the blinded ex-monk arthur, and landlady of david rain. elizabeth/liz pennykettle - a woman probably in her thirties who makes dragons out of clay and sells them at pottery fairs. she is the mother of lucy pennykettle, wife of the blinded ex-monk arthur, and landlady of david rain."
- JPR@5 relevant segments: A Song of Ice and Fire: "...Daenerys is married off by her elder brother Viserys Targaryen to..." Daenerys Targaryen: "Viserys marries Daenerys to the powerful Dothraki warlord Khal Drogo in exchange for his military support..." The Last Dragon Chronicles: "Elizabeth/Liz Pennykettle - A woman probably in her thirties who makes dragons out of clay and sells them at pottery fairs."
- Error analysis: In the first sentence of the answer, T5 hallucinates that dragons are married to humans in the A Song of Ice and Fire storyline. In the second sentence, T5 suddenly jumps from a discussion of A Song of Ice and Fire to The Last Dragon Chronicles, thereby misleading the reader. The repeated sentences (highlighted in blue in the original table) illustrate repeated content.

Example 3
- Question: When was under god added to the pledge of allience?
- Disambiguations: (Q1) When was under god officially added to the pledge of allience? (answer: June 14, 1954); (Q2) When did Bowman popularly introduce the pledge of allience with under god added to a society meeting in Chicago? (answer: February 12, 1948); (Q3) When was the bill signed into law that under god was added to the pledge of Alliegence? (answer: Flag Day / June 14, 1954); (Q4) When was under god added to the pledge of Alliegence for the first time? (answer: February 12, 1948); (Q5) When did the board of directors of the Knights of Columbus adopt a resolution to add under god to the pledge of Alliegence? (answer: April 30, 1951)
- T5 output: "the pledge of allegiance of the united states is an expression of allegiance to the flag of the united states and the republic of the united states of america. it was originally composed by captain george thatcher balch, a union army officer during the civil war and later a teacher of patriotism in new york city schools. the form of the pledge used today was largely devised by francis bellamy in 1892, and formally adopted by congress as the pledge in 1942. the most recent alteration of its wording came on flag day, june 14, 1954." [Missing mention of the words "under god"]
- JPR@5 relevant segments: Pledge of Allegiance: "Louis Albert Bowman, an attorney from Illinois, was the first to suggest the addition of 'under God' to the pledge... At a meeting on February 12, 1948, he led the society in reciting..." Pledge of Allegiance: "In 1951, the Knights of Columbus, the world's largest Catholic fraternal service organization, also began including the words 'under God' in the Pledge of Allegiance." Pledge of Allegiance: "Congress passed the necessary legislation and Eisenhower signed the bill into law on Flag Day, June 14, 1954. Eisenhower said: The phrase 'under God' was incorporated into the Pledge of Allegiance on June 14, 1954, by a Joint Resolution of Congress amending § 4 of the Flag Code enacted in 1942."
- Error analysis: The T5 output introduces the Pledge of Allegiance and mentions some of the right dates (June 14, 1954), but does not mention that the alteration on June 14, 1954, added the words "under god" to the Pledge.
+ +history of the Pledge of Allegiance but does not mention the target phrase («under God»). + +Repetitions Finally, we observe a somewhat technical issue of repetitions in the generated answers, as shown in the second row of Table 6. \ No newline at end of file diff --git a/asqafactoidquestionsmeetlongformanswers/images.zip b/asqafactoidquestionsmeetlongformanswers/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3aac14e8586ccfb109a5185a9510abf746442076 --- /dev/null +++ b/asqafactoidquestionsmeetlongformanswers/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:adae28e90638b8225af86a9a008c5e821504607caaa6cd4342be722d349b5085 +size 836895 diff --git a/asqafactoidquestionsmeetlongformanswers/layout.json b/asqafactoidquestionsmeetlongformanswers/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..207cf110fb263125a18dc36f53b0504371ddf4b1 --- /dev/null +++ b/asqafactoidquestionsmeetlongformanswers/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c1b30a5e95e0c409a147febca80640f38ee69520107921918660e7700db0f245 +size 439689 diff --git a/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_content_list.json b/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a626ea7ab25dd85ae328448ecfa7309c29b97b67 --- /dev/null +++ b/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9cf5122203fffc8f6d6532227fff2498d2cc96f23ad0af2b508f117262e485c8 +size 164266 diff --git a/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_model.json b/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_model.json new 
file mode 100644 index 0000000000000000000000000000000000000000..46fc28e5fa65b27b029e025251de0d3f19a9d981 --- /dev/null +++ b/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f75f2a1a676c016e4d314e31be3cc8c18e38b59556845f7b68ea6f65500501c +size 236068 diff --git a/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_origin.pdf b/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..51e0bfe08428a3bf632bde8716e39bf7b8ed2375 --- /dev/null +++ b/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5d4afea5ad7b5723e9b0332cbfc2c99eaab3f4b0b21f2c656a779c98d3eee92 +size 654172 diff --git a/asurveyofactivelearningfornaturallanguageprocessing/full.md b/asurveyofactivelearningfornaturallanguageprocessing/full.md new file mode 100644 index 0000000000000000000000000000000000000000..aa9a9d20ddc6504123b37c5943d76c48cd9a16b2 --- /dev/null +++ b/asurveyofactivelearningfornaturallanguageprocessing/full.md @@ -0,0 +1,612 @@ +# A Survey of Active Learning for Natural Language Processing + +Zhisong Zhang, Emma Strubell, Eduard Hovy + +Language Technologies Institute, Carnegie Mellon University + +zhisongz@cs.cmu.edu, strubell@cmu.edu, hovy@cmu.edu + +# Abstract + +In this work, we provide a literature review of active learning (AL) for its applications in natural language processing (NLP). In addition to a fine-grained categorization of query strategies, we also investigate several other important aspects of applying AL to NLP problems. These include AL for structured prediction tasks, annotation cost, model learning (especially with deep neural models), and starting and stopping AL. 
Finally, we conclude with a discussion of related topics and future directions. + +# 1 Introduction + +The majority of modern natural language processing (NLP) systems are based on data-driven machine learning models. The success of these models depends on the quality and quantity of the available target training data. While these models can obtain impressive performance if given enough supervision, it is usually expensive to collect large amounts of annotations, especially considering that the labeling process can be laborious and challenging for NLP tasks (§3.2). Active learning (AL), which aims to achieve high accuracy with fewer training labels by allowing a model to choose the data to be annotated and used for learning, is a widely studied approach to tackling this labeling bottleneck (Settles, 2009). + +Active learning has been studied for more than twenty years (Lewis and Gale, 1994; Lewis and Catlett, 1994; Cohn et al., 1994, 1996) and there have been several literature surveys on this topic (Settles, 2009; Olsson, 2009; Fu et al., 2013; Aggarwal et al., 2014; Hino, 2020; Schröder and Niekler, 2020; Ren et al., 2021; Zhan et al., 2022). Nevertheless, there is still a lack of an AL survey for NLP that includes recent advances. Settles (2009) and Olsson (2009) provide great surveys covering AL for NLP, but these surveys are now more than a decade old. In the meantime, the field of NLP has been transformed by deep learning. While other more recent surveys cover deep active learning, they are either too specific, focused only on text classification (Schröder and Niekler, 2020), or too general, covering AI applications more broadly (Ren et al., 2021; Zhan et al., 2022). + +![](images/f965fdf6cbf76ae005cbace5f409dff2038b5410b74bf641555a6b0c5c447f62.jpg) +Figure 1: Counts of AL (left) and "neural" (right) papers in the ACL Anthology over the past twenty years. + +Moreover, applying AL to NLP tasks requires specific considerations, e.g.
handling complex output structures and trade-offs in text annotation cost ($\S 3$), which have not been thoroughly discussed. + +In order to provide an NLP-specific AL survey, we start by searching the ACL Anthology for AL-related papers. We simply search for the keyword "active" in paper titles and then perform manual filtering. We also gradually include relevant papers missed by keyword search, as well as papers from other venues encountered by following reference links throughout the surveying process. The distribution of AL-related papers in the ACL Anthology over the past twenty years is shown in Figure 1, which also includes rough counts of works concerning neural models, obtained by searching for the word "neural" in titles. The overall trend is interesting: there is a peak around the years 2009 and 2010, while the counts drop and fluctuate during the mid-2010s, which corresponds to the time when neural models became prominent in NLP. We observe a renewed interest in AL research in recent years, which is primarily focused on deep active learning (Ren et al., 2021; Zhan et al., 2022). + +# 1.1 Overview + +We mainly examine the widely utilized pool-based scenario (Lewis and Gale, 1994), where a pool of unlabeled data is available and instances are drawn from the pool to be annotated.

Algorithm 1 A typical active learning procedure.
Input: An unlabeled data pool $\mathcal{U}$
Output: The final labeled dataset $\mathcal{L}$ and trained model $\mathcal{M}$
1: $\mathcal{L}, \mathcal{U} \gets$ seed($\mathcal{U}$) ▷ Start (§5.1)
2: $\mathcal{M} \gets$ train($\mathcal{L}, \mathcal{U}$) ▷ Model Learning (§4)
3: while not stopCriterion() do ▷ Stop (§5.2)
4:  $\mathcal{I} \gets$ query($\mathcal{M}, \mathcal{U}$) ▷ Query (§2, §3)
5:  $\mathcal{I}' \gets$ annotate($\mathcal{I}$) ▷ Annotate (§3)
6:  $\mathcal{U} \gets \mathcal{U} - \mathcal{I}$; $\mathcal{L} \gets \mathcal{L} \cup \mathcal{I}'$
7:  $\mathcal{M} \gets$ train($\mathcal{L}, \mathcal{U}$) ▷ Model Learning (§4)
8: return $\mathcal{L}, \mathcal{M}$
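The pool-based loop in Algorithm 1 can be sketched in a few lines of Python. The entropy-based query strategy, the seed/batch/budget sizes, and the generic train/predict callbacks are illustrative choices for this sketch, not part of any particular system:

```python
import math
import random

def entropy(probs):
    """Predictive entropy, a standard output-uncertainty measure."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def active_learning_loop(pool, oracle, train, predict_proba,
                         seed_size=2, batch_size=2, budget=6):
    """Pool-based AL: seed a labeled set, then repeatedly query the most
    uncertain instances, annotate them, and retrain the model."""
    random.seed(0)                                   # reproducible seeding
    labeled = {x: oracle(x) for x in random.sample(pool, seed_size)}
    unlabeled = [x for x in pool if x not in labeled]
    model = train(labeled)
    while unlabeled and len(labeled) < budget:       # stopping criterion
        ranked = sorted(unlabeled,
                        key=lambda x: entropy(predict_proba(model, x)),
                        reverse=True)
        for x in ranked[:batch_size]:                # query + annotate
            labeled[x] = oracle(x)
            unlabeled.remove(x)
        model = train(labeled)                       # retrain
    return labeled, model
```

Swapping the `entropy` scorer for a disagreement- or gradient-based score changes the query strategy without touching the rest of the loop.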
Algorithm 1 illustrates a typical AL procedure, which consists of a loop of instance selection with the current model and model training with updated annotations. The remainder of this survey is organized according to the main steps in this procedure: + +- In §2, we discuss the core aspect of AL: query strategies, with a fine-grained categorization over informativeness (§2.1), representativeness (§2.2) and the combination of the two (§2.3). +- In §3, we cover two additional important topics of querying and annotating for NLP tasks: AL for structured prediction tasks (§3.1) and the cost of annotation with AL (§3.2). +- In §4, we discuss the model and its learning: the query-successor model mismatch scenario (§4.1) and AL with advanced learning techniques (§4.2). +- In §5, we examine methods for starting (§5.1) and stopping (§5.2) AL. + +In §6, we conclude with related topics and future directions. We also include representative AL works for various NLP tasks in Appendix A and discuss some other aspects of AL for NLP in Appendix B. + +# 2 Query Strategies + +# 2.1 Informativeness + +Informativeness-based query strategies mostly assign an informativeness measure to each unlabeled instance individually. The instance(s) with the highest measure will be selected. + +# 2.1.1 Output Uncertainty + +Uncertainty sampling (Lewis and Gale, 1994) is probably the simplest and the most commonly
For non-probabilistic models, similar ideas can be utilized, such as selecting the instances that are close to the decision boundary in an SVM (Schohn and Cohn, 2000; Tong and Koller, 2001). + +Another way to measure output uncertainty is to check the divergence of a model's predictions with respect to an instance's local region. If an instance is near the decision boundary, the model's outputs may be different within its local region. In this spirit, recent works examine different ways to check instances' local divergence, such as nearest-neighbour searches (Margatina et al., 2021), adversarial perturbation (Zhang et al., 2022b) and data augmentation (Jiang et al., 2020). + +# 2.1.2 Disagreement + +Uncertainty sampling usually considers the outputs of only one model. In contrast, disagreement-based strategies utilize multiple models and select the instances that are most disagreed among them. This is also a widely-adopted algorithm, of which the famous query-by-committee (QBC; Seung et al., 1992) is an example. The disagreement can be measured by vote entropy (Engelson and Dagan, 1996), KL-divergence (McCallum and Nigam, 1998) or variation ratio (Freeman, 1965). + +To construct the model committee, one can train a group of distinct models. Moreover, taking a Bayesian perspective over the model parameters is also applicable (Houlsby et al., 2011). Especially with neural models, (Gal and Ghahramani, 2016) show that dropout could approximate inference and measure model uncertainty. This deep Bayesian method has been applied to AL for computer vision (CV) tasks (Gal et al., 2017) as well as various NLP tasks (Siddhant and Lipton, 2018; Shen et al., 2018; Shelmanov et al., 2021). + +# 2.1.3 Gradient + +Gradient information can be another signal for querying, with the motivation to choose the instances that would most strongly impact the model. + +In this strategy, informativeness is usually measured by the norm of the gradients. 
Since we do not know the gold labels for unlabeled instances, the loss is usually calculated as the expectation over all labels. This leads to the strategy of expected gradient length (EGL), introduced by Settles et al. (2007) and later applied to sequence labeling (Settles and Craven, 2008) and speech recognition (Huang et al., 2016). Zhang et al. (2017) explore a variation for neural networks where only the gradients of word embeddings are considered and show its effectiveness for text classification. + +# 2.1.4 Performance Prediction + +Predicting performance can be another indicator for querying. Ideally, the selected instances should be the ones that most reduce future errors if labeled and added to the training set. This motivates the expected error reduction strategy (Roy and McCallum, 2001), which chooses instances that lead to the least expected error if added to retrain a model. This strategy can be computationally costly since retraining is needed for each candidate. + +Recently, methods have been proposed to learn another model to select instances that lead to the fewest errors, usually measured on a held-out development set. Reinforcement learning and imitation learning have been utilized to train such policy models (Bachman et al., 2017; Fang et al., 2017; Liu et al., 2018a,b). This learning-to-select strategy may have some constraints. First, it requires labeled data (maybe from another domain) to train the policy. To mitigate this reliance, Vu et al. (2019) use the current task model as an imperfect annotator for AL simulations. Moreover, the learning signals may be unstable for complex tasks, as Koshorek et al. (2019) show for semantic tasks. + +A similar and simpler idea is to select the most erroneous or ambiguous instances with regard to the current task model, which can also be done with another performance-prediction model. 
Yoo and Kweon (2019) directly train a smaller model to predict instance losses for CV tasks, an approach that has also been adopted for NLP (Cai et al., 2021; Shen et al., 2021). In a similar spirit, Wang et al. (2017) employ a neural model to judge the correctness of the model prediction for SRL, and Brantley et al. (2020) learn a policy to decide whether expert querying is required for each state in sequence labeling. Inspired by data maps (Swayamdipta et al., 2020), Zhang and Plank (2021) train a model to select ambiguous instances whose average correctness over the training iterations is close to a predefined threshold. For machine translation (MT), special techniques can be utilized to seek erroneous instances, such as using a backward translator to check round-trip translations (Haffari et al., 2009; Zeng et al., 2019) or quality estimation (Logacheva and Specia, 2014a,b).

# 2.2 Representativeness

Only considering the informativeness of individual instances may have the drawback of sampling bias (Dasgupta, 2011; Prabhu et al., 2019) and the selection of outliers (Roy and McCallum, 2001; Karamcheti et al., 2021). Therefore, representativeness, which measures how instances correlate with each other, is another major factor to consider when designing AL query strategies.

# 2.2.1 Density

With the motivation to avoid outliers, density-based strategies prefer instances that are more representative of the unlabeled set. Selecting by $n$-gram or word counts (Ambati et al., 2010a; Zhao et al., 2020b) can be regarded as a simple way of density measurement. Generally, the common measurement is an instance's average similarity to all other instances (McCallum and Nigam, 1998; Settles and Craven, 2008). While it may be costly to calculate similarities of all instance pairs, considering only the $k$-nearest-neighbor instances has been proposed as an alternative option (Zhu et al., 2008c, 2009).
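A minimal sketch of such a density score, assuming cosine similarity over feature vectors (the toy pool is illustrative; setting `k` gives the cheaper nearest-neighbour variant mentioned above):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def density(i, pool, k=None):
    """Average similarity of instance i to the rest of the pool.

    With k set, only the k most similar neighbours are averaged.
    Low density flags an outlier that queries should avoid.
    """
    sims = sorted((cosine(pool[i], pool[j]) for j in range(len(pool)) if j != i),
                  reverse=True)
    if k is not None:
        sims = sims[:k]
    return sum(sims) / len(sims)

# The last vector is an outlier: it gets the lowest density score, so
# density weighting steers queries away from it.
pool = [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0], [-1.0, 5.0]]
scores = [density(i, pool) for i in range(len(pool))]
```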
# 2.2.2 Discriminative

Another direction is to select instances that are different from the already labeled ones. Again, for NLP tasks, simple feature-based metrics can be utilized for this purpose, preferring instances with more unseen $n$-grams or out-of-vocabulary words (Eck et al., 2005; Bloodgood and Callison-Burch, 2010; Erdmann et al., 2019). Generally, similarity scores can also be utilized to select the instances that are less similar to the labeled set (Kim et al., 2006; Zhang et al., 2018; Zeng et al., 2019). Another interesting idea is to train a model to discriminate between the labeled and unlabeled sets. Gissin and Shalev-Shwartz (2019) directly train a classifier for this purpose, and adversarial training can also be naturally adopted (Sinha et al., 2019; Deng et al., 2018). In domain adaptation scenarios, the same motivation leads to the usage of a domain separator to filter instances (Rai et al., 2010).

# 2.2.3 Batch Diversity

Ideally, only the single most useful instance would be selected in each iteration. However, it is more efficient and practical to adopt batch-mode AL (Settles, 2009), where a batch of instances is selected each time. In this case, we need to consider the dissimilarities not only between selected instances and labeled ones but also within the selected batch.

To select a batch of diverse instances, there are two common approaches. 1) Iterative selection collects the batch in an iterative greedy way (Brinker, 2003; Shen et al., 2004). In each iteration, an instance is selected by comparing it with previously chosen instances to avoid redundancy. More advanced diversity-based criteria, such as coreset (Geifman and El-Yaniv, 2017; Sener and Savarese, 2018) and determinantal point processes (Shi et al., 2021), can also be approximated in a similar way.
2) Clustering-based methods partition the unlabeled data into clusters and select instances among them (Tang et al., 2002; Xu et al., 2003; Shen et al., 2004; Nguyen and Smeulders, 2004; Zhdanov, 2019; Yu et al., 2022). Since the chosen instances come from different clusters, diversity can be achieved to some extent.

For the calculation of similarity, in addition to comparing the input features or intermediate neural representations, other methods have been investigated, such as utilizing model-based similarity (Hazra et al., 2021), gradients (Ash et al., 2020; Kim, 2020), and masked-LM surprisal embeddings (Yuan et al., 2020).

# 2.3 Hybrid

Unsurprisingly, informativeness and representativeness can be combined for instance querying, leading to hybrid strategies. A simple approach is to merge multiple criteria into one, which can be achieved by a weighted sum (Kim et al., 2006; Chen et al., 2011) or multiplication (Settles and Craven, 2008; Zhu et al., 2008c).

There are also strategies that naturally integrate multiple criteria. Examples include (uncertainty-)weighted clustering (Zhdanov, 2019), diverse gradient selection (Ash et al., 2020; Kim, 2020), where the gradients themselves contain uncertainty information (§2.1.3), and determinantal point processes (DPP) with a quality-diversity decomposition (Shi et al., 2021).

Moreover, multi-step querying, which applies multiple criteria in series, is another natural hybrid method. For example, one can first filter the highly uncertain instances and then perform clustering to select a diverse batch from them (Xu et al., 2003; Shen et al., 2004; Mirroshandel et al., 2011). An alternative strategy of selecting the most uncertain instances per cluster has also been utilized (Tang et al., 2002).

Instead of statically merging criteria into one query strategy, a dynamic combination may better fit the AL process, since different strategies may excel at different AL phases.
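A toy sketch of one such static combination: greedy batch construction that trades an instance's uncertainty against its redundancy with already-selected instances (the `beta` weight, cosine similarity and toy inputs are illustrative choices, not a method from any particular paper):

```python
import math

def select_batch(pool, uncertainty, batch_size, beta=1.0):
    """Greedy batch selection: score = uncertainty - beta * redundancy,
    where redundancy is the max similarity to instances already in the batch.

    pool: {id: feature vector}; uncertainty: {id: informativeness score}.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(x * x for x in b)))

    batch = []
    candidates = set(pool)
    while candidates and len(batch) < batch_size:
        def score(i):
            redundancy = max((cosine(pool[i], pool[j]) for j in batch), default=0.0)
            return uncertainty[i] - beta * redundancy
        best = max(candidates, key=score)
        batch.append(best)
        candidates.remove(best)
    return batch

pool = {"a": [1.0, 0.0], "b": [0.99, 0.05], "c": [0.0, 1.0]}
unc = {"a": 0.9, "b": 0.85, "c": 0.5}
# "b" is nearly a duplicate of "a", so the second pick is the diverse "c".
batch = select_batch(pool, unc, batch_size=2)
```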
For example, at the start of AL, uncertainty sampling may be unreliable due to the scarcity of labeled data, and representativeness-based methods could be preferable, whereas in later stages, where enough data are available and finer-grained decision boundaries are targeted, uncertainty may be a suitable strategy. DUAL (Donmez et al., 2007) is such a dynamic strategy that can switch from a density-based selector to an uncertainty-based one. Ambati et al. (2011b) further propose GraDUAL, which gradually switches strategies within a switching range. Wu et al. (2017) adopt a similar idea with a pre-defined monotonic function to control the combination weights.

# 3 Query and Annotation

# 3.1 AL for Structured Prediction

AL has been widely studied for classification tasks, while in NLP many tasks involve structured prediction. In these tasks, the system needs to output a structured object consisting of a group of interdependent variables (Smith, 2011), such as a label sequence or a parse tree. Special care needs to be taken when querying and annotating for these more complex tasks (Thompson et al., 1999). One main decision is whether to annotate full structures for input instances (§3.1.1) or to allow the annotation of only partial structures (§3.1.2).

# 3.1.1 Full-structure AL

First, if we regard the full output structure of an instance as a whole and perform querying and annotation at the full-instance level, then AL for structured prediction tasks is not very different from AL for simpler classification tasks. Nevertheless, considering that the output space is usually exponentially large and infeasible to enumerate explicitly, querying may require further inspection.

Some uncertainty sampling strategies, such as entropy, need to consider the full output space.
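Short of computing an exact structured entropy, a minimal illustrative workaround is to score a sequence by aggregating per-token marginal uncertainties; the aggregation modes (sum, length-normalized mean, max) and toy inputs below are our own sketch:

```python
import math

def token_entropy(marginals):
    """Entropy of one token's marginal label distribution."""
    return -sum(p * math.log(p) for p in marginals if p > 0)

def sequence_uncertainty(token_marginals, mode="mean"):
    """Aggregate per-token uncertainties into one sequence score.

    mode="sum" favours long sequences; "mean" length-normalizes;
    "max" uses only the single most uncertain token.
    """
    scores = [token_entropy(m) for m in token_marginals]
    if mode == "sum":
        return sum(scores)
    if mode == "mean":
        return sum(scores) / len(scores)
    if mode == "max":
        return max(scores)
    raise ValueError(mode)

# A long but confident sequence vs. a short ambiguous one: summing prefers
# the long one, while the length-normalized mean prefers the ambiguous one.
long_conf = [[0.95, 0.05]] * 10
short_amb = [[0.55, 0.45]] * 2
```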
Instead of the infeasible explicit enumeration, dynamic-programming algorithms that are similar to the ones in decoding and inference processes can be utilized, such as algorithms for tree-entropy (Hwa, 2000, 2004) and sequence-entropy (Mann and McCallum, 2007; Settles and Craven, 2008). + +Instead of considering the full output space, top-k approximation is a simpler alternative that takes $k$ -best predicted structures as a proxy. This is also a frequently utilized method (Tang et al., 2002; Kim et al., 2006; Rocha and Sanchez, 2013). + +For disagreement-based strategies, the measurement of partial disagreement may be required, since full-match can be too strict for structured objects. Fine-grained evaluation scores can be reasonable choices for this purpose, such as F1 score for sequence labeling (Ngai and Yarowsky, 2000). + +Since longer instances usually have larger uncertainties and might be preferred, length normalization is a commonly-used heuristic to avoid this bias (Tang et al., 2002; Hwa, 2000, 2004; Shen et al., 2018). Yet, Settles and Craven (2008) argue that longer sequences should not be discouraged and may contain more information. + +Instead of directly specifying the full utility of an instance, aggregation is also often utilized by gathering utilities of its sub-structures, usually along the factorization of the structured modeling. For example, the sequence uncertainty can be obtained by summing or averaging the uncertainties of all the tokens (Settles and Craven, 2008). Other aggregation methods are also applicable, such as weighted sum by word frequency (Ringger et al., 2007) or using only the most uncertain (least probable) one (Myers and Palmer, 2021; Liu et al., 2022). + +# 3.1.2 Partial-structure AL + +A structured object can be decomposed into smaller sub-structures with different training utilities. 
For example, in a dependency tree, functional relations are usually easier to judge, while prepositional attachment links may be more informative for the learning purpose. This naturally leads to AL with partial structures, where querying and annotating can be performed at the sub-structure level.

Factorizing full structures into the finest-grained sub-structures and regarding them as the annotation units could be a natural choice. Typical examples include individual tokens for sequence labeling (Marcheggiani and Artières, 2014), word boundaries for segmentation (Neubig et al., 2011; Li et al., 2012b), syntactic-unit pairs for dependency parsing (Sassano and Kurohashi, 2010) and mention pairs for coreference (Gasperin, 2009; Miller et al., 2012; Sachan et al., 2015). The querying strategy for the sub-structures can be similar to the classification cases, though inference is usually needed to calculate marginal probabilities. Moreover, if full structures are desired as annotation outputs, semi-supervised techniques such as self-training (§4.2) could be utilized to assign pseudo labels to the unannotated parts (Tomanek and Hahn, 2009b; Majidi and Crane, 2013).

Choosing larger sub-structures is often preferable, since partial annotation still requires understanding the larger context, and frequently jumping between different contexts may require more reading time (§3.2.1). Moreover, increasing the sampling granularity may mitigate the missed-class effect, where certain classes may be overlooked (Tomanek et al., 2009).
Typical examples of larger sub-structures include sub-sequences for sequence labeling (Shen et al., 2004; Chaudhary et al., 2019; Radmard et al., 2021), word-wise head edges for dependency parsing (Flannery and Mori, 2015; Li et al., 2016), neighborhood pools (Laws et al., 2012) or mention-wise anaphoric links (Li et al., 2020; Espeland et al., 2020) for coreference, and phrases for MT (Bloodgood and Callison-Burch, 2010; Miura et al., 2016; Hu and Neubig, 2021). In addition to increasing granularity, grouping queries can also help to make annotation easier, such as adopting a two-stage selection of choosing uncertain tokens from uncertain sentences (Mirroshandel and Nasr, 2011; Flannery and Mori, 2015) and selecting nearby instances in a row (Miller et al., 2012). + +For AL with partial structures, output modeling is of particular interest since the model needs to learn from partial annotations. If directly using local discriminative models where each substructure is decided independently, learning with partial annotations is straightforward since the annotations are already complete to the models (Neubig et al., 2011; Flannery and Mori, 2015). For more complex models that consider interactions among output sub-structures, such as global models, special algorithms are required to learn from incomplete annotations (Scheffer et al., 2001; Wanvarie et al., 2011; Marcheggiani and Artières, 2014; Li et al., 2016). One advantage of these more complex models is the interaction of the partial labels + +and the remaining parts. For example, considering the output constraints for structured prediction tasks, combining the annotated parts and the constraints may reduce the output space of other parts and thus lower their uncertainties, leading to better queries (Roth and Small, 2006; Sassano and Kurohashi, 2010; Mirroshandel and Nasr, 2011). 
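As a toy sketch of the two-stage selection mentioned above (uncertain tokens chosen from uncertain sentences), where all identifiers and uncertainty scores are illustrative:

```python
def two_stage_select(sentences, n_sentences=2, n_tokens=3):
    """Pick the most uncertain sentences first (by mean token uncertainty),
    then the most uncertain tokens within each of them.

    sentences: {sent_id: [per-token uncertainty, ...]}.
    Returns a list of (sent_id, token_index) annotation queries.
    """
    by_sentence = sorted(sentences,
                         key=lambda s: sum(sentences[s]) / len(sentences[s]),
                         reverse=True)
    queries = []
    for sid in by_sentence[:n_sentences]:
        tokens = sorted(range(len(sentences[sid])),
                        key=lambda t: sentences[sid][t], reverse=True)
        queries.extend((sid, t) for t in tokens[:n_tokens])
    return queries

# "s2" is confident overall, so none of its tokens are queried.
sents = {"s1": [0.9, 0.1, 0.8], "s2": [0.2, 0.1], "s3": [0.7, 0.6, 0.5, 0.4]}
queries = two_stage_select(sents, n_sentences=2, n_tokens=2)
```

Grouping queries this way keeps the annotator within a few sentence contexts, which connects to the reading-time concern in §3.2.1.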
More generally, the annotation of one label can immediately influence others with cheap re-inference, which can help batch-mode selection (Marcheggiani and Artières, 2014) and interactive correction (Culotta and McCallum, 2005).

In addition to classical structured-prediction tasks, classification tasks can also be cast as structured predictions with partial labeling. Partial feedback is one example, adopted to make annotating classification tasks simpler, especially when there are a large number of target labels. For example, annotators may find it much easier to answer yes/no questions (Hu et al., 2019) or rule out negative classes (Lippincott and Van Durme, 2021) than to identify the correct label.

# 3.2 Annotation Cost

AL mainly aims to reduce real annotation cost, and we discuss several important topics on this point.

# 3.2.1 Cost Measurement

Most AL works adopt simple measurements of unit cost, that is, assuming that annotating each instance requires the same cost. Nevertheless, the annotation efforts for different instances may vary (Settles et al., 2008). For example, longer sentences may cost more to annotate than shorter ones. Because of this, many works assign unit costs to tokens instead of sequences, which may still be inaccurate. In particular, AL tends to select difficult and ambiguous instances, which may require more annotation effort (Hachey et al., 2005; Lynn et al., 2012). It is important to properly measure annotation cost, since the measurement directly affects the evaluation of AL algorithms. The comparisons of query strategies may vary under different cost measurements (Haertel et al., 2008a; Bloodgood and Callison-Burch, 2010; Chen et al., 2015).

Probably the best cost measurement is the actual annotation time (Baldridge and Palmer, 2009).
In particular, when cost comparisons are less straightforward, such as comparing annotating data against writing rules (Ngai and Yarowsky, 2000) or partial against full annotations (§3.1; Flannery and Mori, 2015; Li et al., 2016, 2020), time-based evaluation is an ideal choice. This requires actual annotating exercises rather than simulations.

Since cost measurement can also be used for querying (§3.2.2), it would be helpful to be able to predict the real cost before annotating. This can be cast as a regression problem, for which several works learn a linear cost model based on input features (Settles et al., 2008; Ringger et al., 2008; Haertel et al., 2008a; Arora et al., 2009).

# 3.2.2 Cost-sensitive Querying

Given the goal of reducing actual cost, the querying strategies should also take it into consideration. That is, we want to select not only high-utility instances but also low-cost ones. A natural cost-sensitive querying strategy is return-on-investment (ROI; Haertel et al., 2008b; Settles et al., 2008; Donmez and Carbonell, 2008). In this strategy, instances with higher net benefit per unit cost are preferred, which is equivalent to dividing the original querying utility by the cost measure. Tomanek and Hahn (2010) evaluate the effectiveness of ROI together with two other strategies, constraining the maximal cost budget per instance and weighted rank combination. Haertel et al. (2015) provide further analytic and empirical evaluation, showing that ROI can reduce total cost.

In real AL scenarios, things can be much more complex. For example, there can be multiple annotators with different expertise (Baldridge and Palmer, 2009; Huang et al., 2017; Cai et al., 2020), and the annotators may refuse to answer or make mistakes (Donmez and Carbonell, 2008). Being aware of these scenarios, Donmez and Carbonell (2008) propose proactive learning to jointly select the optimal oracle and instance. Li et al.
(2017) further extend proactive learning to NER tasks. + +# 3.2.3 Directly Reducing Cost + +In addition to better query strategies, there are other ways of directly reducing annotation cost, such as computer-assisted annotation. In AL, models and annotators usually interact in an indirect way where models only query the instances to present to the annotators, while there could be closer interactions. + +Pre-annotation is such an idea, where not only the raw data instances but also the model's best or top- $k$ predictions are sent to the annotators to help them make decisions. If the model's predictions are reasonable, the annotators can simply select or make a few corrections to obtain the gold annotations rather than creating from scratch. + +This method has been shown effective when combined with AL (Baldridge and Osborne, 2004; Vlachos, 2006; Ringger et al., 2008; Skeppstedt, 2013; Canizares-Díaz et al., 2021). Post-editing for MT is also a typical example (Dara et al., 2014). + +Moreover, the models could provide help at real annotating time. For example, Culotta and McCallum (2005) present an interactive AL system where the user's corrections can propagate to the model, which generates new predictions for the user to further refine. Interactive machine translation (IMT) adopts a similar idea, where the annotator corrects the first erroneous character, based on which the model reproduces the prediction. AL has also been combined with IMT to further reduce manual efforts (González-Rubio et al., 2012; Peris and Casacuberta, 2018; Gupta et al., 2021). + +# 3.2.4 Wait Time + +In AL iterations, the annotators may need to wait for the training and querying steps (Line 3 and 4 in Algorithm 1). This wait time may bring some hidden costs, thus more efficient querying and training would be preferable for faster turnarounds. 
To speed up querying, sub-sampling is a simple method to deal with large unlabeled pools (Roy and McCallum, 2001; Ertekin et al., 2007; Tsvigun et al., 2022). For some querying strategies, pre-calculating and caching unchanging information can also help (Ashrafi Asli et al., 2020; Citovsky et al., 2021). In addition, approximation with $k$-nearest neighbours can be utilized to calculate density (Zhu et al., 2009) or search for instances after adversarial attacks (Ru et al., 2020).

To reduce training time, a seemingly reasonable strategy is to apply incremental training across AL iterations, that is, continuing to train previous models on the new instances. However, Ash and Adams (2020) show that this type of warm start may lead to sub-optimal performance for neural models, and many recent AL works train models from scratch (Hu et al., 2019; Ein-Dor et al., 2020). Another method is to use an efficient model for querying and a more powerful model for final training; however, this might lead to sub-optimal results, as discussed in §4.1.

Another idea to reduce wait time is to simply allow querying with stale information. In fact, batch-mode AL (§2.2.3) is such an example, where instances in the same batch are queried with the same model. Haertel et al. (2010) propose parallel AL, which maintains separate loops of annotating, training, and scoring, and allows dynamic and parameterless instance selection at any time.

# 4 Model and Learning

# 4.1 Model Mismatch

While it is natural to adopt the same best-performing model throughout the AL process, there are cases where the query and final (successor) models mismatch (Lewis and Catlett, 1994). First, more efficient models are preferable for querying to reduce wait time (§3.2.4). Moreover, since data usually outlive models, reusing AL-collected data to train another model is often desired (Baldridge and Osborne, 2004; Tomanek et al., 2007).
Several works show that model mismatch may make the gains from AL negligible or even negative (Baldridge and Osborne, 2004; Lowell et al., 2019; Shelmanov et al., 2021), which raises concerns about the utilization of AL in practice.

For efficiency purposes, distillation can be utilized to improve querying efficiency while keeping reasonable AL performance. Shelmanov et al. (2021) show that using a smaller distilled version of a pre-trained model for querying does not lead to too much performance drop. Tsvigun et al. (2022) combine this idea with pseudo-labeling and sub-sampling to further reduce computational cost. Similarly, Nguyen et al. (2022) keep a smaller proxy model for querying and synchronize the proxy with the main model by distillation.

# 4.2 Learning

AL can be combined with other advanced learning techniques to further reduce the required annotations.

Semi-supervised learning. Since AL usually assumes an unlabeled pool, semi-supervised learning is a natural fit. Combining the two is not a new idea: McCallum and Nigam (1998) adopt the EM algorithm to estimate the outputs of unlabeled data and utilize them for learning. This type of self-training or pseudo-labeling technique is often utilized in AL (Tomanek and Hahn, 2009b; Majidi and Crane, 2013; Yu et al., 2022). With a similar motivation, Dasgupta and Ng (2009) use an unsupervised algorithm to identify unambiguous instances to train an active learner. For the task of word alignment, which can be learned in an unsupervised manner, incorporating supervision with AL can bring further improvements in a data-efficient way (Ambati et al., 2010b,c).

Transfer learning. AL can be easily combined with transfer learning, another technique to reduce required annotations.
Utilizing pre-trained models is already a good example (Ein-Dor et al., 2020; Yuan et al., 2020; Tamkin et al., 2022), and continual training (Gururangan et al., 2020) can also be applied (Hua and Wang, 2022; Margatina et al., 2022). Moreover, transductive learning is commonly combined with AL by transferring learning signals from different domains (Chan and Ng, 2007; Shi et al., 2008; Rai et al., 2010; Saha et al., 2011; Wu et al., 2017; Kasai et al., 2019; Yuan et al., 2022) or languages (Qian et al., 2014; Fang and Cohn, 2017; Fang et al., 2017; Chaudhary et al., 2019, 2021; Moniz et al., 2022). In addition to the task model, the model-based query policy (§2.1.4) is also often obtained with transfer learning.

Weak supervision. AL can also be combined with weakly supervised learning. Examples include learning from inputs and execution results for semantic parsing (Ni et al., 2020), labeling based on identical structure vectors for entity representations (Qian et al., 2020), learning from gazetteers and dictionaries for sequence labeling (Brantley et al., 2020) and interactively discovering labeling rules (Zhang et al., 2022a).

Data augmentation. Augmentation is also applicable in AL and has been explored with iterative back-translation (Zhao et al., 2020b), mixup for sequence labeling (Zhang et al., 2020) and phrase-to-sentence augmentation for MT (Hu and Neubig, 2021). As discussed in §2.1.1, augmentation can also be helpful for instance querying (Jiang et al., 2020; Zhang et al., 2022b). Another interesting scenario involving augmentation and AL is query synthesis, which directly generates instances to be annotated instead of selecting existing unlabeled ones. Though synthesizing text is still a hard problem in general, there have been successful applications for simple classification tasks (Schumann and Rehbein, 2019; Quteineh et al., 2020).
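A minimal sketch of combining querying with the self-training idea discussed above: the least confident instances go to the annotator, while very confident predictions become pseudo-labels for semi-supervised training (the threshold, query size and toy distributions are illustrative assumptions):

```python
def split_pool(pool_probs, query_size=2, pseudo_threshold=0.95):
    """One AL round combined with self-training.

    pool_probs: {id: predictive distribution over labels}.
    Returns (ids to send to the annotator, {id: pseudo-label}).
    """
    confidence = {i: max(p) for i, p in pool_probs.items()}
    # Query the least confident instances for human annotation.
    to_annotate = sorted(confidence, key=confidence.get)[:query_size]
    # Pseudo-label the most confident remaining predictions.
    pseudo = {i: p.index(max(p))
              for i, p in pool_probs.items()
              if confidence[i] >= pseudo_threshold and i not in to_annotate}
    return to_annotate, pseudo

pool = {"a": [0.98, 0.02], "b": [0.55, 0.45], "c": [0.6, 0.4], "d": [0.97, 0.03]}
to_annotate, pseudo = split_pool(pool)
# "b" and "c" are queried; "a" and "d" are pseudo-labeled with class 0.
```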
# 5 Starting and Stopping AL

# 5.1 Starting AL

While there are cases where enough labeled data already exist to train a reasonable model and AL is utilized to provide further improvements (Bloodgood and Callison-Burch, 2010; Geifman and El-Yaniv, 2017), we often face the cold-start problem, where instances need to be selected without a reasonable model. In particular, how to select the seed data to start the AL process is an interesting question, which may greatly influence performance in the initial AL stages (Tomanek et al., 2009; Horbach and Palmer, 2016).

Random sampling is probably the most commonly utilized strategy, which is reasonable since it preserves the original data distribution. Some representativeness-based querying strategies (§2.2) can also be utilized; for example, selecting points near the clustering centroids is a way to obtain representative and diverse seeds (Kang et al., 2004; Zhu et al., 2008c; Hu et al., 2010). Moreover, some advanced learning techniques (§4.2) can be helpful here, such as transfer learning (Wu et al., 2017) and unsupervised methods (Vlachos, 2006; Dasgupta and Ng, 2009). In addition, language models can be useful tools: Dligach and Palmer (2011) use them to select low-probability words in the context of word sense disambiguation, and Yuan et al. (2020) choose cluster centers with surprisal embeddings from pre-trained contextualized LMs.

# 5.2 Stopping AL

When adopting AL in practice, it would be desirable to know when to stop, namely when the model performance is already near its upper limit, before the annotation budget runs out. For this purpose, a stopping criterion is needed, which checks whether certain metrics satisfy certain conditions. There can be simple heuristics.
For example, AL can be stopped when no unlabeled instance is closer to the separating hyperplane than any of the support vectors of an SVM (Schohn and Cohn, 2000; Ertekin et al., 2007), or when no new $n$-grams remain in the unlabeled set for MT (Bloodgood and Callison-Burch, 2010). Nevertheless, these heuristics are specific to the underlying models or target tasks. For the design of a general stopping criterion, there are three main aspects to consider: the metric, the dataset and the condition.

For the metric, measuring performance on a development set seems a natural option. However, the results would be unstable if this set is too small, and it would be impractical to assume a large development set. Cross-validation on the training set is also problematic, since the data labeled by AL is usually biased. In this case, metrics from the query strategies can be utilized. Examples include uncertainty or confidence (Zhu and Hovy, 2007; Vlachos, 2008), disagreement (Tomanek et al., 2007; Tomanek and Hahn, 2008; Olsson and Tomanek, 2009), estimated performance (Laws and Schütze, 2008), expected error (Zhu et al., 2008a), confidence variation (Ghayoomi, 2010), as well as actual performance on the selected instances (Zhu and Hovy, 2007). Moreover, comparing the predictions between consecutive AL iterations is another reasonable option (Zhu et al., 2008b; Bloodgood and Vijay-Shanker, 2009a).

The dataset on which to calculate the stopping metric requires careful choice. The results could be unstable if an improper set is adopted (Tomanek and Hahn, 2008). Many works suggest that a separate unlabeled dataset should be utilized (Tomanek and Hahn, 2008; Vlachos, 2008; Bloodgood and Vijay-Shanker, 2009a; Beatty et al., 2019; Kurlandski and Bloodgood, 2022). Since the stopping metrics usually do not rely on gold labels, this dataset could potentially be very large to provide more stable results, though wait time would then be another factor to consider (§3.2.4).
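The prediction-stability option (comparing predictions between consecutive AL iterations) can be sketched as a simple stopping check; the window size and agreement threshold below are illustrative choices, not values from any of the cited works:

```python
def should_stop(prediction_history, window=3, agreement_threshold=0.99):
    """Stop AL when predictions on a held-out unlabeled set have stabilized:
    agreement between consecutive iterations stays above a threshold for
    `window` successive comparisons.

    prediction_history: list of label lists, one per AL iteration.
    """
    if len(prediction_history) < window + 1:
        return False
    recent = prediction_history[-(window + 1):]
    for prev, curr in zip(recent, recent[1:]):
        agree = sum(p == c for p, c in zip(prev, curr)) / len(curr)
        if agree < agreement_threshold:
            return False
    return True

# Predictions flip in early iterations, then stabilize in the last ones.
history = [[0, 1, 0, 1], [0, 0, 0, 1], [0, 1, 1, 1],
           [0, 1, 1, 1], [0, 1, 1, 1], [0, 1, 1, 1]]
```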
The condition to stop AL is usually a comparison of the metrics against a pre-defined threshold. Earlier works only look at the metric at the current iteration, for example, stopping if the uncertainty or the error is less than the threshold (Zhu and Hovy, 2007). In this case, the threshold is hard to specify, since it depends on the model and the task. Zhu et al. (2008b) cascade multiple stopping criteria to mitigate this reliance. A more stable option is to track the change of the metrics over several AL iterations, such as stopping when the confidence consistently drops (Vlachos, 2008), the changing rate flattens (Laws and Schütze, 2008) or the predictions stabilize across iterations (Bloodgood and Vijay-Shanker, 2009a; Bloodgood and Grothendieck, 2013).

Pullar-Strecker et al. (2021) provide an empirical comparison of common stopping criteria and serve as a useful reference. Moreover, stopping AL is closely related to performance prediction and early stopping. The latter can be of particular interest to AL, since learning in early AL stages faces the low-resource problem, and how to perform early stopping may also require careful consideration.

# 6 Related Topics and Future Directions

# 6.1 Related Topics

There are many related topics that could be explored together with AL. Other data-efficient learning methods such as semi-supervised and transfer learning are naturally compatible with AL (§4.2). Curriculum learning (Bengio et al., 2009), which arranges training instances in a meaningful order, may also be integrated with AL (Platanios et al., 2019; Zhao et al., 2020a; Jafarpour et al., 2021). Uncertainty (Gawlikowski et al., 2021), outlier detection (Hodge and Austin, 2004) and performance prediction (Xia et al., 2020) are related to instance querying. Crowdsourcing can be adopted to further reduce annotation cost (§B). Model efficiency (Menghani, 2021) would be crucial for reducing wait time (§3.2.4).
AL is a typical type of human-in-the-loop framework (Wang et al., 2021), and it will be interesting to explore more human-computer interaction techniques in AL. + +# 6.2 Future Directions + +Complex tasks. AL is mostly adopted for simple classification, while there are many more complex tasks in NLP. For example, except for MT, generation tasks have been much less thoroughly explored with AL. Tasks with more complex inputs such as NLI and QA also require extra care when using AL; obtaining unlabeled data is already non-trivial. Nevertheless, preliminary work has shown that AL can be helpful for data collection for such tasks (Mussmann et al., 2020). + +Beyond direct target labeling. In addition to directly annotating target labels, AL can also be utilized in other ways to help the target task, such as labeling features or rationales (Melville and Sindhwani, 2009; Druck et al., 2009; Sharma et al., 2015), annotating explanations (Liang et al., 2020), evaluation (Mohankumar and Khapra, 2022) and rule discovery (Zhang et al., 2022a). + +AL in practice. Most AL works simulate annotations on an existing labeled dataset. Though this method is convenient for algorithm development, it ignores several challenges of applying AL in practice. As discussed in this survey, real annotation cost (§3.2.1), efficiency and wait time (§3.2.4), data reuse (§4.1) and starting and stopping (§5) are all important practical aspects which may not emerge in simulation. Moreover, since the AL process usually cannot be repeated multiple times, how to select the query strategy and other hyper-parameters remains a great challenge. It will be critical to address these issues to bring AL into practical use (Rehbein et al., 2010; Attenberg and Provost, 2011; Settles, 2011; Lowell et al., 2019) and make it more widely utilized (Tomanek and Olsson, 2009). + +# Limitations + +There are several limitations of this work. 
First, we mainly focus on AL work in the context of NLP, while AL work in other fields may also offer ideas that could be utilized for NLP tasks. For example, many querying strategies originally developed for CV tasks could be naturally adapted to NLP applications (Ren et al., 2021). We encourage readers to refer to the other surveys mentioned in §1 for additional related AL work. Moreover, the descriptions in this survey are mostly brief in order to provide more comprehensive coverage within the page limit. We mainly present the works in meaningful, structured groups rather than describing them in an unstructured sequence, and we hope that this work can serve as an index pointing to the corresponding works for more detail. Finally, this is a pure survey without any experiments or empirical results. It would be helpful to perform comparative experiments across different AL strategies, which could provide more meaningful guidance (Zhan et al., 2022). We leave this to future work. + +# References + +Charu C Aggarwal, Xiangnan Kong, Quanquan Gu, Jiawei Han, and S Yu Philip. 2014. Active learning: A survey. In Data Classification, pages 599-634. Chapman and Hall/CRC. +Vamshi Ambati, Sanjika Hewavitharana, Stephan Vogel, and Jaime Carbonell. 2011a. Active learning with multiple annotations for comparable data classification task. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web, pages 69-77, Portland, Oregon. Association for Computational Linguistics. +Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2010a. Active learning and crowd-sourcing for machine translation. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA). +Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2010b. Active learning-based elicitation for semi-supervised word alignment.
In Proceedings of the ACL 2010 Conference Short Papers, pages 365-370, Uppsala, Sweden. Association for Computational Linguistics. +Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2010c. Active semi-supervised learning for improving word alignment. In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing, pages 10-17, Los Angeles, California. Association for Computational Linguistics. +Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2011b. Multi-strategy approaches to active learning for statistical machine translation. In Proceedings of Machine Translation Summit XIII: Papers, Xiamen, China. +Sankaranarayanan Ananthakrishnan, Rohit Prasad, David Stallard, and Prem Natarajan. 2010a. Discriminative sample selection for statistical machine translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 626-635, Cambridge, MA. Association for Computational Linguistics. +Sankaranarayanan Ananthakrishnan, Rohit Prasad, David Stallard, and Prem Natarajan. 2010b. A semi-supervised batch-mode active learning strategy for improved statistical machine translation. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 126-134, Uppsala, Sweden. Association for Computational Linguistics. +Shilpa Arora, Eric Nyberg, and Carolyn P. Rose. 2009. Estimating annotation cost for active learning in a multi-annotator environment. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, pages 18-26, Boulder, Colorado. Association for Computational Linguistics. +Jordan Ash and Ryan P Adams. 2020. On warm-starting neural network training. Advances in Neural Information Processing Systems, 33:3884-3894. +Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2020. Deep batch active learning by diverse, uncertain gradient lower bounds.
In International Conference on Learning Representations. +Seyed Arad Ashrafi Asli, Behnam Sabeti, Zahra Majdabadi, Preni Golazizian, Reza Fahmi, and Omid Momenzadeh. 2020. Optimizing annotation effort using active learning strategies: A sentiment analysis case study in Persian. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2855-2861, Marseille, France. European Language Resources Association. +Jordi Atserias, Giuseppe Attardi, Maria Simi, and Hugo Zaragoza. 2010. Active learning for building a corpus of questions for parsing. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA). +Josh Attenberg and Seyeda Ertekin. 2013. Class imbalance and active learning. *Imbalanced Learning: Foundations, Algorithms, and Applications*, pages 101-149. + +Josh Attenberg and Foster Provost. 2011. Inactive learning? difficulties employing active learning in practice. ACM SIGKDD Explorations Newsletter, 12(2):36-41. +Philip Bachman, Alessandro Sordoni, and Adam Trischler. 2017. Learning algorithms for active learning. In International Conference on Machine Learning, pages 301-310. PMLR. +Guirong Bai, Shizhu He, Kang Liu, Jun Zhao, and Zaiqing Nie. 2020. Pre-trained language model based active learning for sentence matching. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1495-1504, Barcelona, Spain (Online). International Committee on Computational Linguistics. +Jason Baldridge and Miles Osborne. 2003. Active learning for HPSG parse selection. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 17-24. +Jason Baldridge and Miles Osborne. 2004. Active learning and the total cost of annotation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 9-16, Barcelona, Spain. Association for Computational Linguistics. 
+Jason Baldridge and Alexis Palmer. 2009. How well does active learning actually work? Time-based evaluation of cost-reduction strategies for language documentation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 296-305, Singapore. Association for Computational Linguistics. +Garrett Beatty, Ethan Kochis, and Michael Bloodgood. 2019. The use of unlabeled data versus labeled data for stopping active learning for text classification. In 2019 IEEE 13th International Conference on Semantic Computing (ICSC), pages 287-294. IEEE. +Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41-48. +Michael Bloodgood and Chris Callison-Burch. 2010. Bucking the trend: Large-scale cost-focused active learning for statistical machine translation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 854-864, Uppsala, Sweden. Association for Computational Linguistics. +Michael Bloodgood and John Grothendieck. 2013. Analysis of stopping active learning based on stabilizing predictions. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 10-19, Sofia, Bulgaria. Association for Computational Linguistics. +Michael Bloodgood and K. Vijay-Shanker. 2009a. A method for stopping active learning based on stabilizing predictions and the need for user-adjustable stopping. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 39-47, Boulder, Colorado. Association for Computational Linguistics. +Michael Bloodgood and K. Vijay-Shanker. 2009b. Taking into account the differences between actively and passively acquired data: The case of active learning with support vector machines for imbalanced datasets.
In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 137-140, Boulder, Colorado. Association for Computational Linguistics. +Kianté Brantley, Amr Sharaf, and Hal Daume III. 2020. Active imitation learning with noisy guidance. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2093-2105, Online. Association for Computational Linguistics. +Klaus Brinker. 2003. Incorporating diversity in active learning with support vector machines. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 59-66. +Tingting Cai, Zhiyuan Ma, Hong Zheng, and Yangming Zhou. 2021. Ne-lp: normalized entropy-and loss prediction-based sampling for active learning in Chinese word segmentation on ehrs. Neural Computing and Applications, 33(19):12535-12549. +Tingting Cai, Yangming Zhou, and Hong Zheng. 2020. Cost-quality adaptive active learning for Chinese clinical named entity recognition. In 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 528-533. IEEE. +Hian Canizares-Diaz, Alejandro Piad-Morffis, Suilan Estevez-Velarde, Yoan Gutierrez, Yudivian Almeida Cruz, Andres Montoyo, and Rafael Muñoz-Guillena. 2021. Active learning for assisted corpus construction: A case study in knowledge discovery from biomedical text. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 216-225, Held Online. INCOMA Ltd. +Kai Cao, Xiang Li, Miao Fan, and Ralph Grishman. 2015. Improving event detection with active learning. In Proceedings of the International Conference on Recent Advances in Natural Language Processing, pages 72-77, Hissar, Bulgaria. INCOMA Ltd. Shoumen, BULGARIA. +Yee Seng Chan and Hwee Tou Ng. 2007. Domain adaptation with active learning for word sense disambiguation. 
In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 49-56, Prague, Czech Republic. Association for Computational Linguistics. + +Aditi Chaudhary, Antonios Anastasopoulos, Zaid Sheikh, and Graham Neubig. 2021. Reducing confusion in active learning for part-of-speech tagging. Transactions of the Association for Computational Linguistics, 9:1-16. +Aditi Chaudhary, Jiateng Xie, Zaid Sheikh, Graham Neubig, and Jaime Carbonell. 2019. A little annotation does a lot of good: A study in bootstrapping low-resource named entity recognizers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5164-5174, Hong Kong, China. Association for Computational Linguistics. +Chenhua Chen, Alexis Palmer, and Caroline Sporleder. 2011. Enhancing active learning for semantic role labeling via compressed dependency trees. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 183-191, Chiang Mai, Thailand. Asian Federation of Natural Language Processing. +Jinying Chen, Andrew Schein, Lyle Ungar, and Martha Palmer. 2006. An empirical study of the behavior of active learning for word sense disambiguation. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 120-127, New York City, USA. Association for Computational Linguistics. +Yukun Chen, Thomas A Lasko, Qiaozhu Mei, Joshua C Denny, and Hua Xu. 2015. A study of active learning methods for named entity recognition in clinical text. Journal of biomedical informatics, 58:11-18. +Gui Citovsky, Giulia DeSalvo, Claudio Gentile, Lazaros Karydas, Anand Rajagopalan, Afshin Rostamizadeh, and Sanjiv Kumar. 2021. Batch active learning at scale. Advances in Neural Information Processing Systems, 34. +David Cohn, Les Atlas, and Richard Ladner. 1994. Improving generalization with active learning. 
Machine learning, 15(2):201-221. +David A Cohn, Zoubin Ghahramani, and Michael I Jordan. 1996. Active learning with statistical models. Journal of artificial intelligence research, 4:129-145. +Aron Culotta and Andrew McCallum. 2005. Reducing labeling effort for structured prediction tasks. In AAAI, volume 5, pages 746-751. +Aswarth Abhilash Dara, Josef van Genabith, Qun Liu, John Judge, and Antonio Toral. 2014. Active learning for post-editing based incrementally retrained MT. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers, pages 185-189, Gothenburg, Sweden. Association for Computational Linguistics. + +Sajib Dasgupta and Vincent Ng. 2009. Mine the easy, classify the hard: A semi-supervised approach to automatic sentiment classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 701-709, Suntec, Singapore. Association for Computational Linguistics. +Sanjoy Dasgupta. 2011. Two faces of active learning. Theoretical computer science, 412(19):1767-1781. +Yue Deng, KaWai Chen, Yilin Shen, and Hongxia Jin. 2018. Adversarial active learning for sequences labeling and generation. In *IJCAI*, pages 4012-4018. +Dmitriy Dligach and Martha Palmer. 2011. Good seed makes a good crop: Accelerating active learning using language modeling. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 6-10, Portland, Oregon, USA. Association for Computational Linguistics. +Pinar Donmez and Jaime G Carbonell. 2008. Proactive learning: cost-sensitive active learning with multiple imperfect oracles. In Proceedings of the 17th ACM conference on Information and knowledge management, pages 619-628. +Pinar Donmez, Jaime G Carbonell, and Paul N Bennett. 2007. Dual strategy active learning. 
In European Conference on Machine Learning, pages 116-127. Springer. +Gregory Druck, Burr Settles, and Andrew McCallum. 2009. Active learning by labeling features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 81-90, Singapore. Association for Computational Linguistics. +Long Duong, Hadi Afshar, Dominique Estival, Glen Pink, Philip Cohen, and Mark Johnson. 2018. Active learning for deep semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 43-48, Melbourne, Australia. Association for Computational Linguistics. +Matthias Eck, Stephan Vogel, and Alex Waibel. 2005. Low cost portability for statistical machine translation based on n-gram frequency and TF-IDF. In Proceedings of the Second International Workshop on Spoken Language Translation, Pittsburgh, Pennsylvania, USA. +Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020. Active Learning for BERT: An Empirical Study. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7949-7962, Online. Association for Computational Linguistics. + +Sean P. Engelson and Ido Dagan. 1996. Minimizing manual annotation cost in supervised training from corpora. In 34th Annual Meeting of the Association for Computational Linguistics, pages 319-326, Santa Cruz, California, USA. Association for Computational Linguistics. +Alexander Erdmann, David Joseph Wrisley, Benjamin Allen, Christopher Brown, Sophie Cohen-Bodenès, Micha Elsner, Yukun Feng, Brian Joseph, Beatrice Joyeux-Prunel, and Marie-Catherine de Marneffe. 2019. Practical, efficient, and customizable active learning for named entity recognition in the digital humanities. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2223-2234, Minneapolis, Minnesota. Association for Computational Linguistics. +Seyda Ertekin, Jian Huang, Leon Bottou, and Lee Giles. 2007. Learning on the border: active learning in imbalanced data classification. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 127-136. +Nuno Escudeiro and Alípio Jorge. 2010. D-confidence: An active learning strategy which efficiently identifies small classes. In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing, pages 18-26, Los Angeles, California. Association for Computational Linguistics. +Vebjørn Espeland, Beatrice Alex, and Benjamin Bach. 2020. Enhanced labelling in active learning for coreference resolution. In Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference, pages 111-121, Barcelona, Spain (online). Association for Computational Linguistics. +Meng Fang and Trevor Cohn. 2017. Model transfer for tagging low-resource languages using a bilingual dictionary. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 587-593, Vancouver, Canada. Association for Computational Linguistics. +Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 595-605, Copenhagen, Denmark. Association for Computational Linguistics. +Meng Fang, Jie Yin, and Dacheng Tao. 2014. Active learning for crowdsourcing using knowledge transfer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28. +Daniel Flannery and Shinsuke Mori. 2015. 
Combining active learning and partial annotation for domain adaptation of a Japanese dependency parser. In Proceedings of the 14th International Conference on Parsing Technologies, pages 11-19, Bilbao, Spain. Association for Computational Linguistics. +Linton C Freeman. 1965. Elementary applied statistics: for students in behavioral science. New York: Wiley. +Lisheng Fu and Ralph Grishman. 2013. An efficient active learning framework for new relation types. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 692-698, Nagoya, Japan. Asian Federation of Natural Language Processing. +Yifan Fu, Xingquan Zhu, and Bin Li. 2013. A survey on instance selection for active learning. Knowledge and information systems, 35(2):249-283. +Atsushi Fujii, Kentaro Inui, Takenobu Tokunaga, and Hozumi Tanaka. 1998. Selective sampling for example-based word sense disambiguation. Computational Linguistics, 24(4):573-597. +Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050-1059. PMLR. +Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep bayesian active learning with image data. In International Conference on Machine Learning, pages 1183-1192. PMLR. +Caroline Gasperin. 2009. Active learning for anaphora resolution. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, pages 1-8, Boulder, Colorado. Association for Computational Linguistics. +Jakob Gawlikowski, Cedrique Rovile Njieutcheu Tassi, Mohsin Ali, Jongseok Lee, Matthias Hunt, Jianxiang Feng, Anna Kraspe, Rudolph Triebel, Peter Jung, Ribana Roscher, et al. 2021. A survey of uncertainty in deep neural networks. arXiv preprint arXiv:2107.03342. +Yonatan Geifman and Ran El-Yaniv. 2017. Deep active learning over the long tail. arXiv preprint arXiv:1711.00941. +Masood Ghayoomi. 2010.
Using variance as a stopping criterion for active learning of frame assignment. In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing, pages 1-9, Los Angeles, California. Association for Computational Linguistics. +Daniel Gissin and Shai Shalev-Shwartz. 2019. Discriminative active learning. arXiv preprint arXiv:1907.06347. +Jesús González-Rubio, Daniel Ortiz-Martínez, and Francisco Casacuberta. 2012. Active learning for interactive machine translation. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 245-254, Avignon, France. Association for Computational Linguistics. + +Daniel Grießhaber, Johannes Maucher, and Ngoc Thang Vu. 2020. Fine-tuning BERT for low-resource natural language understanding via active learning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1158-1171, Barcelona, Spain (Online). International Committee on Computational Linguistics. +Kamal Gupta, Dhanvanth Boppana, Rejwanul Haque, Asif Ekbal, and Pushpak Bhattacharyya. 2021. Investigating active learning in interactive neural machine translation. In Proceedings of Machine Translation Summit XVIII: Research Track, pages 10-22, Virtual. Association for Machine Translation in the Americas. +Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics. +Ben Hachey, Beatrice Alex, and Markus Becker. 2005. Investigating the effects of selective sampling on the annotation task. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 144–151, Ann Arbor, Michigan. Association for Computational Linguistics. 
+Hossein Hadian and Hossein Sameti. 2014. Active learning in noisy conditions for spoken language understanding. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1081-1090, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. +Robbie Haertel, Paul Felt, Eric K. Ringger, and Kevin Seppi. 2010. Parallel active learning: Eliminating wait time with minimal staleness. In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing, pages 33-41, Los Angeles, California. Association for Computational Linguistics. +Robbie Haertel, Eric Ringger, Kevin Seppi, James Carroll, and Peter McClanahan. 2008a. Assessing the costs of sampling methods in active learning for annotation. In Proceedings of ACL-08: HLT, Short Papers, pages 65–68, Columbus, Ohio. Association for Computational Linguistics. +Robbie Haertel, Eric Ringger, Kevin Seppi, and Paul Felt. 2015. An analytic and empirical evaluation of return-on-investment-based active learning. In Proceedings of The 9th Linguistic Annotation Workshop, pages 11-20, Denver, Colorado, USA. Association for Computational Linguistics. +Robbie A Haertel, Kevin D Seppi, Eric K Ringger, and James L Carroll. 2008b. Return on investment for active learning. In Proceedings of the NIPS workshop on cost-sensitive learning, volume 72. + +Gholamreza Haffari, Maxim Roy, and Anoop Sarkar. 2009. Active learning for statistical phrase-based machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 415-423, Boulder, Colorado. Association for Computational Linguistics. +Gholamreza Haffari and Anoop Sarkar. 2009. Active learning for multilingual statistical machine translation. 
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 181-189, Suntec, Singapore. Association for Computational Linguistics. +Rishi Hazra, Parag Dutta, Shubham Gupta, Mohammed Abdul Qaathir, and Ambedkar Dukkipati. 2021. Active $^2$ learning: Actively reducing redundancies in active learning methods for sequence tagging and machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1982-1995, Online. Association for Computational Linguistics. +Rui He, Shan He, and Ke Tang. 2021. Multi-domain active learning: A comparative study. arXiv preprint arXiv:2106.13516. +Hideitsu Hino. 2020. Active learning: Problem settings and recent developments. arXiv preprint arXiv:2012.04225. +Victoria Hodge and Jim Austin. 2004. A survey of outlier detection methodologies. Artificial intelligence review, 22(2):85-126. +Andrea Horbach and Alexis Palmer. 2016. Investigating active learning for short-answer scoring. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, pages 301-311, San Diego, CA. Association for Computational Linguistics. +Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. 2011. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745. +Junjie Hu and Graham Neubig. 2021. Phrase-level active learning for neural machine translation. In Proceedings of the Sixth Conference on Machine Translation, pages 1087-1099, Online. Association for Computational Linguistics. +Peiyun Hu, Zack Lipton, Anima Anandkumar, and Deva Ramanan. 2019. Active learning with partial feedback. In International Conference on Learning Representations. +Rong Hu, Brian Mac Namee, and Sarah Jane Delany. 2010. 
Off to a good start: Using clustering to select the initial training set in active learning. In Twenty-Third International FLAIRS Conference. + +Xinyu Hua and Lu Wang. 2022. Efficient argument structure extraction with transfer learning and active learning. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 423-437, Dublin, Ireland. Association for Computational Linguistics. +Jiaji Huang, Rewon Child, Vinay Rao, Hairong Liu, Sanjeev Satheesh, and Adam Coates. 2016. Active learning for speech recognition: the power of gradients. arXiv preprint arXiv:1612.03226. +Sheng-Jun Huang, Jia-Lve Chen, Xin Mu, and Zhi-Hua Zhou. 2017. Cost-effective active learning from diverse labelers. In *IJCAI*, pages 1879–1885. +Rebecca Hwa. 2000. Sample selection for statistical grammar induction. In 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 45-52, Hong Kong, China. Association for Computational Linguistics. +Rebecca Hwa. 2004. Sample selection for statistical parsing. Computational Linguistics, 30(3):253-276. +Fariz Ikhwantri, Samuel Louvan, Kemal Kurniawan, Bagas Abisena, Valdi Rachman, Alfan Farizki Wicaksono, and Rahmad Mahendra. 2018. Multi-task active learning for neural semantic role labeling on low resource conversational corpus. In Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP, pages 43-50, Melbourne. Association for Computational Linguistics. +Makoto Imamura, Yasuhiro Takayama, Nobuhiro Kaji, Masashi Toyoda, and Masaru Kitsuregawa. 2009. A combination of active learning and semi-supervised learning starting with positive and unlabeled examples for word sense disambiguation: An empirical study on Japanese web search query. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 61-64, Suntec, Singapore. Association for Computational Linguistics. +Borna Jafarpour, Dawn Sepehr, and Nick Pogrebnyakov. 2021. Active curriculum learning. 
In Proceedings of the First Workshop on Interactive Learning for Natural Language Processing, pages 40-45, Online. Association for Computational Linguistics. +Zhuoren Jiang, Zhe Gao, Yu Duan, Yangyang Kang, Changlong Sun, Qiong Zhang, and Xiaozhong Liu. 2020. Camouflaged Chinese spam content detection with semi-supervised generative active learning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3080-3085, Online. Association for Computational Linguistics. +Jaeho Kang, Kwang Ryel Ryu, and Hyuk-Chul Kwon. 2004. Using cluster-based sampling to select initial training set for active learning in text classification. In Pacific-Asia conference on knowledge discovery and data mining, pages 384-388. Springer. + +Siddharth Karamcheti, Ranjay Krishna, Li Fei-Fei, and Christopher Manning. 2021. Mind your outliers! investigating the negative impact of outliers on active learning for visual question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7265-7281, Online. Association for Computational Linguistics. +Jungo Kasai, Kun Qian, Sairam Gurajada, Yunyao Li, and Lucian Popa. 2019. Low-resource deep entity resolution with transfer and active learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5851-5861, Florence, Italy. Association for Computational Linguistics. +Seokhwan Kim, Yu Song, Kyungduk Kim, Jeong-Won Cha, and Gary Geunbae Lee. 2006. MMR-based active machine learning for bio named entity recognition. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 69-72, New York City, USA. Association for Computational Linguistics. +Yekyung Kim. 2020. Deep active learning for sequence labeling based on diversity and uncertainty in gradient. 
+Jingbo Zhu, Huizhen Wang, Tianshun Yao, and Benjamin K Tsou. 2008c. Active learning with sampling by uncertainty and density for word sense disambiguation and text classification. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 1137-1144, Manchester, UK. Coling 2008 Organizing Committee. + +# A Tasks + +In this section, we list representative works for different NLP tasks. According to the output structures, the tasks are further categorized into four groups: classification, sequence labeling, complex structured prediction, and generation. + +Classification denotes the tasks whose output consists of only one variable. Text classification that assigns a target label to an input text sequence is a typical example. Pairwise classification and word-level classification are also commonly seen in NLP. + +- Text classification: Please refer to the paper table mentioned in (§C) for related works. We do not list them here since there are too many. +- Pairwise classification: (Grießhaber et al., 2020; Bai et al., 2020; Mussmann et al., 2020) +- Word sense disambiguation (WSD): (Fujii et al., 1998; Chen et al., 2006; Chan and Ng, 2007; Zhu and Hovy, 2007; Zhu et al., 2008c; Imamura et al., 2009; Martínez Alonso et al., 2015) + +Sequence labeling is probably the most commonly seen structured prediction task in NLP. It aims to predict a sequence of labels, among which there may be interactions and constraints. 
+ +- Part-of-speech (POS): (Engelson and Dagan, 1996; Ringger et al., 2007; Haertel et al., 2008a; Marcheggiani and Artières, 2014; Fang and Cohn, 2017; Brantley et al., 2020; Chaudhary et al., 2021) +- (Named) entity recognition (NER/ER): (Shen et al., 2004; Culotta and McCallum, 2005; Kim et al., 2006; Settles and Craven, 2008; Tomanek and Hahn, 2009b; Marcheggiani and Artières, 2014; Chen et al., 2015; Li et al., 2017; Shen et al., 2018; Siddhant and Lipton, 2018; Erdmann et al., 2019; Chaudhary et al., 2019; Brantley et al., 2020; Hazra et al., 2021; Shelmanov et al., 2021; Radmard et al., 2021) +- Segmentation: (Ngai and Yarowsky, 2000; Sassano, 2002; Neubig et al., 2011; Li et al., 2012b; Marcheggiani and Artières, 2014; Cai et al., 2021) +- Natural language understanding (NLU): (Hadian and Sameti, 2014; Deng et al., 2018; Peshterliev et al., 2019; Zhu et al., 2020) + +Complex structured prediction in this work denotes structured prediction tasks that are more complex than sequence labeling and have explicit connections (alignments) between inputs and outputs. They usually aim to extract relational structures among input elements.
+ +- Parsing: (Hwa, 2000; Tang et al., 2002; Baldridge and Osborne, 2003, 2004; Hwa, 2004; Reichart and Rappoport, 2009; Sassano and Kurohashi, 2010; Atserias et al., 2010; Mirroshandel and Nasr, 2011; Majidi and Crane, 2013; Flannery and Mori, 2015; Li et al., 2016; Shi et al., 2021) +- Semantic role labeling (SRL): (Roth and Small, 2006; Wang et al., 2017; Ikhwantri et al., 2018; Siddhant and Lipton, 2018; Koshorek et al., 2019; Myers and Palmer, 2021) +- Coreference: (Gasperin, 2009; Miller et al., 2012; Laws et al., 2012; Zhao and Ng, 2014; Sachan et al., 2015; Li et al., 2020; Espeland et al., 2020; Yuan et al., 2022) +- Relation-related: (Roth and Small, 2008; Bloodgood and Vijay-Shanker, 2009b; Mirroshandel et al., 2011; Fu and Grishman, 2013; Canizares-Diaz et al., 2021; Mallart et al., 2021; Seo et al., 2022; Zhang et al., 2022a) +- Event-related: (Cao et al., 2015; Shen et al., 2021; Lee et al., 2022) +- Word alignment: (Ambati et al., 2010b,c; Rocha and Sanchez, 2013) +- Entity alignment/resolution: (Kasai et al., 2019; Liu et al., 2021) + +Generation refers to the tasks that aim to generate a sequence of tokens. We differentiate them from plain structured prediction tasks since there are usually no explicit alignments between input and output sub-parts in the supervision and such alignments are usually implicitly modeled, especially in recent sequence-to-sequence neural models. MT is a typical generation task, where we further separate traditional statistical machine translation (SMT) and recent neural machine translation (NMT). We also include semantic parsing here, since recent works usually cast it as a sequence-to-sequence generation task. 
+ +- SMT: (Eck et al., 2005; Haffari et al., 2009; Haffari and Sarkar, 2009; Ananthakrishnan et al., 2010b; Bloodgood and Callison-Burch, 2010; Ambati et al., 2010a; Ananthakrishnan et al., 2010a; González-Rubio et al., 2012; Rocha and Sanchez, 2013; Logacheva and Specia, 2014a,b; Miura et al., 2016) + +- NMT: (Peris and Casacuberta, 2018; Liu et al., 2018b; Zhang et al., 2018; Zeng et al., 2019; Zhao et al., 2020b; Hu and Neubig, 2021; Gupta et al., 2021; Zhou and Waibel, 2021; Hazra et al., 2021; Mendonca et al., 2022) +- Semantic parsing: (Duong et al., 2018; Ni et al., 2020; Sen and Yilmaz, 2020) +- Others: (Mairesse et al., 2010; Deng et al., 2018) + +# B Other Aspects + +We describe some other aspects that frequently arise when applying AL to NLP. + +Crowdsourcing and Noise. Crowdsourcing is another way to reduce annotation costs by including non-expert annotations (Snow et al., 2008). Naturally, AL and crowdsourcing may also be combined in the hope of further reducing costs (Ambati et al., 2010a; Laws et al., 2011; Yan et al., 2011; Fang et al., 2014; Zhao et al., 2020c). One specific factor to consider in this case is the noise in the crowdsourced data, since noisy data may have a negative impact on the effectiveness of AL (Rehbein and Ruppenhofer, 2011). Cost-sensitive querying strategies (§3.2.2) can be utilized to select both annotators and instances by estimating labelers' reliability (Yan et al., 2011; Fang et al., 2014). Requiring multiple annotations per instance and then consolidating them is also applicable (Laws et al., 2011). Lin et al. (2019) provide a framework that enables automatic crowd consolidation for AL on sequence labeling tasks. + +Multiple Targets. In many cases, we may want to consider multiple targets rather than only one, for example, annotating instances in multiple domains (Xiao and Guo, 2013; He et al., 2021; Longpre et al., 2022) or multiple languages (Haffari and Sarkar, 2009; Qian et al., 2014; Moniz et al., 2022).
Moreover, there may be multiple target tasks, where multi-task learning (MTL) can interact with AL (Reichart et al., 2008; Ambati et al., 2011a; Rocha and Sanchez, 2013; Ikhwantri et al., 2018; Zhu et al., 2020; Rotman and Reichart, 2022). In these scenarios with multiple targets, strategies that consider all the targets are naturally preferable. Reichart et al. (2008) show that a query strategy that considers all target tasks obtains the overall best performance for MTL. Moniz et al. (2022) suggest that joint learning across multiple languages using a single model outperforms other strategies such as equally dividing budgets or allocating only for a high-resource language and then performing the transfer. + +Data Imbalance. Imbalance is a frequently occurring phenomenon in NLP, and AL can have interesting interactions with it. On the one hand, as in plain learning scenarios, AL should take data imbalance into consideration, with modifications to the model (Bloodgood and Vijay-Shanker, 2009b), the learning algorithm (Zhu and Hovy, 2007), and the query strategies (Tomanek et al., 2009; Escudeiro and Jorge, 2010; Li et al., 2012a). On the other hand, AL can be utilized to address the data imbalance problem and build better data (Ertekin et al., 2007; Tomanek and Hahn, 2009a; Attenberg and Ertekin, 2013; Mottaghi et al., 2020; Mussmann et al., 2020). + +# C Surveying Process + +In this section, we provide more details of our surveying process: + +- For the ACL Anthology, we search for papers with the keyword "active" in titles (by grepping the "Full Anthology BibTeX file"). Some related papers may be missed by this simple keyword search, but as we read through the filtered list, we gradually include the notable missing ones. +- We also include papers outside the ACL Anthology. First, we look for papers by searching with the key phrase "active learning" on arXiv (in the field of cs.CL, excluding those already appearing in the ACL Anthology).
Moreover, we also collect related works in other venues, such as AI/ML conferences and journals. For these venues, we do (can) not perform extensive searches due to high volume (and that many are unrelated to our focus on NLP). We mainly collect related papers in these adjacent venues by following the references from the papers already surveyed. + +We also create a table for the related papers (with detailed categorizations), which can be found at this link: https://github.com/zzsfornlp/zmsp/blob/main/msp2/docs/al4nlp/readme.md. \ No newline at end of file diff --git a/asurveyofactivelearningfornaturallanguageprocessing/images.zip b/asurveyofactivelearningfornaturallanguageprocessing/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c0db231c23288014d71bea575530e9d68b543de5 --- /dev/null +++ b/asurveyofactivelearningfornaturallanguageprocessing/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0404782d687acd89716f486b28de3452cf9eea30be165a0e6eac4a541ac9d757 +size 24267 diff --git a/asurveyofactivelearningfornaturallanguageprocessing/layout.json b/asurveyofactivelearningfornaturallanguageprocessing/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7c7430e12ca723d7d16ee36d3aa69644cd5042a1 --- /dev/null +++ b/asurveyofactivelearningfornaturallanguageprocessing/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b343ea61e6029793c94c7438789b5642cbbf33a240cefdbf45a755083c29ea3 +size 767956 diff --git a/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_content_list.json b/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..67cf86114a4528f6609d01a14f6535f9efdc4820 --- /dev/null +++ b/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_content_list.json @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b6b82476719625f400d0257c1736b813dbc79d9ed10da003b393e74e6323f4e +size 90646 diff --git a/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_model.json b/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_model.json new file mode 100644 index 0000000000000000000000000000000000000000..fcbd2f99f8e371c904639a1e78d71041da1d0d61 --- /dev/null +++ b/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67de09a33cc6b97402e89d5ff6a3793f3005bb831581836696c885f66bd3541a +size 111456 diff --git a/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_origin.pdf b/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a4534a8eb5ae579a4aa2eae612f2556b6a1a9d17 --- /dev/null +++ b/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e11eaec1472439d8aaf51a3d7223fc6850687e2f698d032070258bc1e84de21c +size 453352 diff --git a/asurveyofcomputationalframinganalysisapproaches/full.md b/asurveyofcomputationalframinganalysisapproaches/full.md new file mode 100644 index 0000000000000000000000000000000000000000..647b6f0041925ea00ce1937dfa9cc9a95c1bffee --- /dev/null +++ b/asurveyofcomputationalframinganalysisapproaches/full.md @@ -0,0 +1,341 @@ +# A Survey of Computational Framing Analysis Approaches + +Mohammad Ali + +College of Information Studies +University of Maryland, College Park +mali24@umd.edu + +Naeemul Hassan + +Philip Merrill College of Journalism +College of Information Studies +University of Maryland, College Park +nhassan@umd.edu + +# Abstract + +Framing analysis is predominantly 
qualitative and quantitative, examining small datasets with manual coding. Easy access to digital data over the last two decades has prompted scholars in both computational and social sciences to utilize various computational methods to explore frames in large-scale datasets. The growing scholarship, however, lacks comprehensive overviews of and resources for computational framing analysis methods. Aiming to address this gap, this article surveys existing computational framing analysis approaches and puts them together. The research is expected to help scholars and journalists gain a deeper understanding of how frames are being explored computationally, better equip them to analyze frames in large-scale datasets, and, finally, work on advancing methodological approaches. + +# 1 Introduction + +Vaccine hesitancy has long been recognized as a problem despite research evidence favoring the vaccine's effectiveness (Sallam, 2021). Understanding how vaccination is framed by news media might provide a solution to vaccine hesitancy because a frame determines "how [people] evaluate [a problem] and choose to act upon it" (Entman, 1993, p. 54). Similarly, the exploration of many other problems (e.g., gun violence) warrants the analysis of frames, especially in large-scale datasets in this era. + +Traditionally, researchers explore frames using qualitative and quantitative methods that require manual labor and can handle only small amounts of data (D'angelo, 2018; Reese et al., 2001). The production of and easy access to large volumes of digital data over the last two decades have prompted scholars to explore frames in such big data computationally (Card et al., 2015; Liu et al., 2019; Walter and Ophir, 2019; van Atteveldt and Peng, 2018). + +Prior studies proposed various computational methods (e.g., topic modeling and neural networks).
+ +As the scholarship grows, however, a comprehensive understanding of, and resources for, computational framing analysis methods remain scarce (Nicholls and Culpepper, 2021; Sanfilippo et al., 2008). Researchers may be confused by the multiple approaches to this analysis, raising questions such as: how many computational framing analysis methods exist, and which one should they apply? + +To address this problem and help researchers with such questions, we survey existing computational framing analysis approaches and put the methods and relevant resources together. As such, the survey is guided by the following three research questions: + +RQ1. What computational methods do researchers use to explore frames in large-scale datasets? + +RQ2. How do researchers conceptualize a frame in computational framing analysis studies? + +RQ3. How do researchers use computational methods in exploring frames? + +The primary contributions of this article are: a) it provides a comprehensive understanding of existing computational framing analysis methods and gathers the relevant resources for interested scholars to gain deeper knowledge and start building on, and b) it adds new thoughts to the ongoing discussion on advancing the computational methods of framing analysis. + +# 2 What is Frame or Framing? + +This section provides a conceptual understanding of framing. A classic example of framing concerns a debate over whether to permit the Ku Klux Klan to hold a public rally. One news story with the headline "Ku Klux Klan Tests OSU's Commitment to Free Speech" reported the rally as a free speech issue, while another one with the headline "Possible Ku Klux Klan Rally Raises Safety Concerns" reported it as a disruption of public order. As reflected in the headlines, the two stories used different frames.
People who read the free speech news story expressed higher tolerance toward KKK's rally compared to those who read the public order news story (Nelson et al., 1997, p. 581). Figure 1 shows similar frames deployed in two news headlines on the 2022 Buffalo mass shooting. + +![](images/50b7411827ac4bd4f0420a218f9dd5618ef8d597f6a5c688bf1e0dc1d3ec75ea.jpg) +Figure 1: Framing devices deployed in the headlines of two news reports published by The New York Times and The Guardian on the 2022 Buffalo mass shooting. + +Scholars have not agreed upon a unified definition of framing (Hertog and McLeod, 2001; Van Dijk, 2016). However, a prominent definition, widely used in both traditional and computational framing studies, was provided by Entman (1993). He says: + +To frame is to select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation for the item described. (p. 52) + +As per this definition, a frame is largely determined by its outcome effects, namely four functions: a) defining problems, b) diagnosing causes, c) making judgments, and d) suggesting remedies. The functions depend on how some selected aspects of "perceived" reality are made salient. In 2003, he defined it a bit differently: "Framing entails selecting and highlighting some facets of events or issues, and making connections among them so as to promote a particular interpretation, evaluation, and/or solution" (Entman, 2003, p. 417). This definition makes a few shifts, such as from "causal interpretation" to "interpretation," from "moral evaluation" to "evaluation," and from "treatment recommendation" to "solution." The salient aspects are also interconnected.
+ +While approaching frames as cultural phenomena, Hertog and McLeod (2001) identified a frame as a cultural "[structure] of meaning that includes a set of core concepts and ideas," including "conflicts, metaphors, myths, and narratives" (p. 160). A frame has also been explained as "a central organizing idea... for making sense of relevant events, suggesting what is at issue" (Gamson and Modigliani, 1989, p. 3). Reese et al. (2001) defined a frame from the sociological perspective and focused on six aspects (italicized in the original): "Frames are organizing principles that are socially shared and persistent over time, that work symbolically to meaningfully structure the social world" (p. 11). In a recent definition, D'angelo (2018) defined news framing as "how journalists, their sources, and audiences work within conditions that shape the messages they construct as well as the ways they understand and interpret these messages" (p. xxiv). + +To describe how a frame highlights selected facets of an issue or event, Fairhurst (2005) utilized an analogy that "choosing language to frame people's actions and events is like moving a telescope into position" (p. 125). The selected aspects are then coherently organized in a way that makes an argument, which finally promotes a particular interpretation, evaluation, and solution. This organization of selected aspects can even be subtle, as framing also "refers to subtle alterations in the statement or presentation of judgment and choice problems" (Iyengar, 1994, p. 11). Another crucial aspect of framing is "to choose one particular meaning (or set of meanings) over another" (Fairhurst and Sarr, 1996, p. 3), which is also supported by Entman (1993), who says a frame "operates by selecting and highlighting some features of reality while omitting others" (p. 53). + +Contexts in Framing. A frame is considered context-sensitive. It is shaped in four locations: i) the communicator, ii) the text, iii) the receiver, and iv) the culture (Entman, 1993).
The culture is the stock of commonly invoked frames and explained as (a part of) contexts. A news report's content is fully comprehensible when its contextual information is at the disposal of readers. They interpret a frame and its meaning following contextual information (Baden and D'Angelo, 2018; Tewksbury and Riles, 2018). + +Framing Devices. Framing devices can be defined as tools that are used to make a piece of information more salient, which is, in other words, "making a piece of information more noticeable, + +meaningful, or memorable to audiences" (Entman, 1993, p. 53). While conceptualizing a frame, we accumulated framing devices (see Table 1). To make the list concise and convenient, we combined similar devices and put them into four groups: a) content, b) action, c) context, and d) communicator. The devices or tools can be used to provide either higher or lower salience to selected aspects of reality. In some cases, multiple devices can be applied together as a new device. For example, jargon, metaphors, and contrast can together be used to develop a "story" (Fairhurst and Sarr, 1996). + +![](images/b6a6a66d3c5049b7b19f7b4318fd509d5b182bc28aefafa57fa5616def96e755.jpg) +Figure 2: Summary of the Paper Selection Method + +# 3 Method + +We utilized three ways to identify and select relevant articles for a comprehensive understanding of computational framing analysis methods. First, we searched on Scopus, an abstract and citation database of Elsevier, using relevant keywords: ("computational framing analysis" OR "computational frames analysis" OR ("frame analysis" OR "framing analysis") AND "computational"). It provides 95 articles in the English language. We manually read their abstracts and sorted out 13 articles relating to computational framing analysis. In the sorting process, we read the articles' method sections if needed to make the decision. Other 82 articles were excluded due to their irrelevance. 
The excluded articles were related to "frames" in other fields, such as building structures (e.g., 2D plane frames) and mechanical engineering. Second, we searched on Google Scholar using the exact key + +words and included articles until the third page as no relevant article was found on the third page. This gave us ten relevant articles. Six articles were common in both the Scopus and the Google Scholar searches, resulting in 17 unique articles from both sources. Third, while reading through the 17 selected articles, we tracked down 20 more relevant articles cited in some of those articles. The 20 articles did not appear in the Scopus and Google Scholar searches probably because of the different keywords and phrases used in their titles and abstracts. + +Finally, we got a total of 37 articles selected for this survey (see Figure 2). The articles involve journals and conferences in both computation and social science disciplines. Reading through the articles and their supplemental materials (e.g., coding schema guiding the annotation), if any, we utilized an inductive way to scrutinize various aspects, including a) framing conceptualization, b) functions of computational framing analysis approaches, and c) results and their interpretation. We reported available datasets, codes, and other relevant resources, if any. + +# 4 Analysis + +This section presents an analysis of the selected articles in two broad parts. The first part answers RQ1, and the second part answers RQ2 and RQ3. Table 2 summarizes the articles, identified approaches, codebook, corpora, domains, and resources. + +Codebook, Corpora, & Approaches (RQ1). Analysis of the articles identified at least nine approaches and three major coding schema and annotated corpora for computational framing analysis. The approaches are in the categories of supervised, unsupervised, and mixed methods. A supervised method usually needs an annotated subset of data. 
Here, the model is first trained on a labeled dataset (training data) and then applied to a new, similar dataset (test data) to classify or predict each instance (Kotsiantis et al., 2007). In contrast, an unsupervised method does not need any pre-annotated datasets. Instead, it explores all unlabeled data. + +Conceptualization & Functions (RQ2 & RQ3). As a way of answering RQ2 and RQ3, we explore how researchers conceptualize frames and utilize computational methods in analyzing frames in each approach, codebook, and corpus. + +# 4.1 Codebook & Corpora + +# 4.1.1 Policy Frames Codebook + +Boydstun et al. (2013) and Boydstun et al. (2014) proposed a codebook named the "policy frames codebook" (PFC). The PFC consists of 14 categories of "frame dimensions" and an "other" category. The dimensions include "economic frames," "capacity and resources frames," "morality frames," etc. For example, a news report is labeled as an economic frame if it focuses on "the costs, benefits, or monetary/financial implications of the issue (to an individual, family, community, or to the economy as a whole)" (Boydstun et al., 2014, p. 6). + +They developed the codebook through brainstorming and iteratively applying it to random texts. With the codebook, they deployed 3,033 coders to manually code three sets of articles on immigration, tobacco, and same-sex marriage. Using the labeled documents, they finally developed binary (i.e., present or absent) logistic regression text classifiers (Boydstun et al., 2013, 2014). + +# 4.1.2 Media Frames Corpus + +Using the PFC, Card et al. (2015) offered a manually annotated corpus of news reports named the "media frames corpus" (MFC). The news reports were collected from three domains: immigration, smoking, and same-sex marriage. The MFC was applied in other studies (e.g., Field et al., 2018). Card et al. (2015) annotated the three datasets based on the PFC's 15 framing dimensions (Boydstun et al., 2013).
The authors, however, did not apply the annotations to any new datasets. In 2016, they added four more categories: pro, neutral, anti, and irrelevant. + +Conceptualization in PFC & MFC. Boydstun et al. (2013, 2014) conceptualized framing by resorting to the widely used framing definition of Entman (1993). Overall, they put "language" at the center of identifying and analyzing frames. The PFC's development is motivated by three framing concepts: a) frame selection varies across situations, b) frames evolve over time, and c) frames spread across issues, geographic locations, and institutions or organizations. Card et al. (2015) also used Entman (1993)'s definition in conceptualizing frames. They focused on some framing elements that work coherently as a framing package. + +Review. The authors conceptualized frames with existing framing definitions. However, the framing aspects they mentioned (e.g., Entman, 1993) were not utilized in developing the 15 "framing dimensions." Considering the development process and the broad definition of each frame, the 15 dimensions seem to fit "topics" better than frames. As per framing theory, the categorization of these dimensions looks arbitrary and too broad to capture a frame's nuances. For example, a text is identified as an "economic frame" if it focuses on anything related to the economy as a whole. Let's consider the Ku Klux Klan example mentioned above. As per MFC's 15 dimensions, both KKK news reports could probably be identified as a "law and order, crime and justice frame" under the PFC. Here, it does not answer the "how" question at all. The dimensions, however, can be considered as topics. The MFC corpus inherited the same limitations, as it was developed using the PFC codebook. + +# 4.1.3 Gun Violence Frame Corpus (GVFC) + +This article identified another annotated corpus named the "Gun Violence Frame Corpus" (GVFC). It was applied in the neural network-based models discussed later.
In this dataset, the authors manually annotated 1,300 news headlines collected from 21 U.S. news media outlets. Using nine pre-defined codes drawn from the literature, multiple coders annotated the headlines. Finally, they used a BERT model to build a frame prediction classifier, with an overall accuracy of 84.23%. + +Conceptualization. Liu et al. (2019) used Entman (1993)'s prominent definition to conceptualize framing. They highlighted various ways of constructing frames, such as word choice and labeling by journalists "to promote a certain side" (p. 504). The authors also focused on generic versus issue-specific frames. In terms of manual codes, they applied a deductive approach: first defining some frames and then manually labeling news articles into those pre-defined frames. + +Review. The article briefly conceptualized a frame and included aspects of the widely used framing definition (e.g., Entman, 1993). However, not all the framing codes in GVFC were defined in line with how framing was conceptualized. For example, a code in the politics category was defined to apply "... as long as [a] news headline mentions a politician's name," which seems misaligned with the nuances of their conceptualization. + +# 4.2 Computational Approaches + +# 4.2.1 Topic Modeling + +Various prior studies utilized topic modeling (TM) to explore frames (e.g., DiMaggio et al., 2013). + +Method. The TM algorithm discovers latent themes in a large collection of documents (Blei, 2012). A topic is a probability distribution over a fixed vocabulary (p. 78). The algorithm produces a number $(k)$ of lists of words, each word included in a list because of its high probability of belonging there. Each list of words is considered to be a topic, and each topic has a different probability distribution. The latent Dirichlet allocation (LDA) topic model provides an assignment of each document to the topic(s).
As a mixed-membership model, each of its documents may be assigned to multiple topics, considering that a document could have elements of multiple topics. DiMaggio et al. (2013) used LDA topic modeling to explore frames. They view each topic as a frame, saying that a topic "includes terms that call attention to particular ways" (p. 593). + +Conceptualization. In their study, DiMaggio et al. (2013) conceptualized a frame as "a set of discursive cues (e.g., words, images, and narrative) that suggests a particular interpretation of a person, event, organization, practice, condition, or situation" (p. 593). They cited Gamson et al. (1992)'s definition that a frame is "a central organizing principle that holds together and gives coherence and meaning to a diverse array of symbols." They considered each topic to be a frame. + +Review. Here, the conceptualization of a frame looks consistent with the overall framing idea. However, the topic model's output (i.e., lists of words) and its interpretation do not seem aligned with framing aspects. A word list produced by a topic model carries no connections among its words, owing to the model's bag-of-words assumption. The interpretation of each word list in DiMaggio et al. (2013) also treats it as a theme or issue, not a frame. For example, they reported the results using words like "highlight," "emphasize," and "concerned with" (e.g., this topic highlights legislative actions). Framing nuances such as problem definition and causal interpretation could not be extracted here. + +# 4.2.2 Structural Topic Modeling (STM) + +Method. The STM model was also used to explore frames (e.g., Roberts et al., 2014). Compared to LDA topic modeling (Blei, 2012), STM allows including metadata or covariates in the model. With metadata (e.g., political ideology and time) added to the dataset and model, STM allows researchers to interpret how the topics are associated with that metadata.
For example, in terms of political ideology, such as conservatives and liberals, researchers might identify one topic as more aligned with conservatives and another with liberals. Metadata can also be used in predicting the topics' prevalence (Gilardi et al., 2021; Nicholls and Culpepper, 2021). + +In their study exploring topics in a corpus of newspaper texts, Gilardi et al. (2021) used several covariates, including time. Their results show how the topics are distributed over time across various states in the U.S. Since the authors followed DiMaggio et al. (2013)'s argument of considering a topic as a frame, their results' interpretation also focuses on themes or topics, instead of frames. + +Conceptualization. Gilardi et al. (2021) conceptualized a frame with Gamson et al. (1992)'s definition that a frame can be understood as a "storyline or unfolding narrative about an issue" (p. 385). In terms of exploring frames with STM, Gilardi et al. (2021) relied on DiMaggio et al. (2013)'s argument that topics identified through TM can be viewed as frames. + +Review. Like the topic modeling approach, the STM-based analysis (Gilardi et al., 2021) is also constrained by considering a topic as a frame, so STM carries similar limitations for framing analysis. Compared to topic modeling, STM offers additional insights into the topics or themes through the analysis of covariates. Both methods are based on the bag-of-words idea and lack the semantic contextualization needed for exploring frames. + +# 4.2.3 Hierarchical Topic Modeling + +Method. Studies also used hierarchical topic modeling (HTM) to explore frames. Nguyen (2015) and Nguyen et al. (2015) introduced an HTM model named "Supervised Hierarchical Latent Dirichlet Allocation" (SHLDA) that aims to analyze frames in a large dataset. In SHLDA, each document in the corpus is associated with a continuous score (e.g., conservative vs. liberal ideology).
It produces a hierarchy of topics, where the first-level nodes are considered agendas and the second-level nodes frames. Documents' scores help explain how the topics are framed with respect to people's positions. Its document generative process combines hierarchical LDA and the hierarchical Dirichlet process (HDP). The authors applied it to three datasets and conducted qualitative and quantitative analyses to validate the model's agendas and frames. + +Conceptualization. Nguyen (2015) also used the framing definition of Entman (1993) in conceptualizing a frame. However, unlike Gilardi et al. (2021), Nguyen (2015) considered a topic as an agenda (e.g., what topics are talked about) and a sub-topic as a second-level agenda or a frame (e.g., how these topics are talked about). + +Review. As elaborated above, SHLDA is one step ahead of topic modeling. However, a crucial incongruity remains between how the authors conceptualized a frame (e.g., sub-topics) and how they interpreted the results. Though there is no unified framing definition, the idea of considering a sub-topic as a frame does not align with traditional framing conceptualization (Entman, 1993; McCombs et al., 1997; Ghanem, 1997). Like many prior framing studies, the SHLDA output might also be considered simply topics and their relevant attributes, not frames. Moreover, Nguyen (2015)'s qualitative analysis to validate the output as frames is not systematically executed, and the presentation of its results does not illustrate any framing aspects (Entman, 1993). + +# 4.2.4 Cluster Analysis + +Method. The $k$-means clustering algorithm is another unsupervised approach used to explore frames. Burscher et al. (2016) conducted two $k$-means clusterings on a dataset. One includes all words, and the other includes selected words (i.e., nouns, adjectives, and adverbs). After creating document vectors with TF-IDF in both groups, they conducted $k$-means clustering to find clusters.
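This TF-IDF-plus-$k$-means pipeline can be sketched in pure Python. The toy documents and the naive "first $k$ documents" initialization are illustrative assumptions, not details from Burscher et al. (2016):

```python
import math

def tfidf(docs):
    """Map each tokenized document to a sparse TF-IDF vector (term -> weight)."""
    n = len(docs)
    df = {}                                    # document frequency per term
    for d in docs:
        for w in set(d):
            df[w] = df.get(w, 0) + 1
    vecs = []
    for d in docs:
        tf = {}
        for w in d:
            tf[w] = tf.get(w, 0) + 1
        vecs.append({w: (c / len(d)) * math.log(n / df[w]) for w, c in tf.items()})
    return vecs

def dist2(a, b):
    """Squared Euclidean distance between two sparse vectors."""
    return sum((a.get(w, 0) - b.get(w, 0)) ** 2 for w in set(a) | set(b))

def kmeans(vecs, k, iters=20):
    """Lloyd's algorithm: every document ends up in exactly one cluster."""
    centers = [dict(v) for v in vecs[:k]]      # naive init: first k documents
    assign = [0] * len(vecs)
    for _ in range(iters):
        for i, v in enumerate(vecs):           # assignment step
            assign[i] = min(range(k), key=lambda j: dist2(v, centers[j]))
        for j in range(k):                     # update step: mean of members
            members = [vecs[i] for i in range(len(vecs)) if assign[i] == j]
            if members:
                terms = {w for m in members for w in m}
                centers[j] = {w: sum(m.get(w, 0) for m in members) / len(members)
                              for w in terms}
    return assign

# Illustrative toy corpus: two documents about the economy, two about crime.
docs = [["economy", "market", "jobs"], ["crime", "police", "court"],
        ["economy", "jobs", "market"], ["police", "court", "crime"]]
labels = kmeans(tfidf(docs), k=2)
```

With the two disjoint vocabularies above, each document lands in exactly one cluster, which illustrates the single-membership property of centroid-based clustering.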
As a centroid-based clustering approach, $k$-means requires the number of clusters $(k)$ to be specified in advance, and each cluster is represented by its center. The authors selected the number of clusters $(k)$ using the "elbow method." Each document is assigned to a cluster based on its relatively closer distance to that cluster's center (Burscher et al., 2016). Unlike topic modeling, $k$-means clustering is a single-membership approach where each document generally belongs to one cluster. + +Conceptualization. Burscher et al. (2016) conceptualized a frame in terms of "word frequencies" and described words as highly reliable and less biased in producing frames. They "used word frequencies as features [of a frame] in [their] cluster analyses" (p. 533). They utilized the traditional framing definition only partially (e.g., presence or absence of certain keywords, stock phrases) (Entman, 1993). + +Review. While Burscher et al. (2016) conceptualized and interpreted frames in terms of word frequencies and co-occurrences, the framing devices listed in Table 1 suggest that words are simply one of many devices for constructing a frame. Such a conceptualization does not help explore frames, despite their acknowledgment that "based on plain word features, a cluster analysis cannot reveal complex semantic and logical relationships like causality" (Burscher et al., 2016, p. 541). As a single-membership approach, this method also conflicts with one of the core framing ideas, namely that a framing device may belong to multiple frames. The results were presented with wording such as "refers to." For example, "cluster B5 refers to nuclear power ... in Iran" (p. 439). This indicates a topic or issue; it does not indicate "how" the "nuclear" issue was discussed and evaluated as a problem. Both the conceptualization and the output seem to illustrate certain topics, not frames. + +# 4.2.5 Neural Network Model + +Method.
Some studies utilized the neural network approach to build frame-identifying classifiers and analyzed frames in various text documents (e.g., news reports and tweets). Mainly, two annotated datasets, namely MFC and GVFC, were used in building these models. + +MFC was utilized in a number of such studies, including those based on probabilistic soft logic (PSL) (Johnson et al., 2017), LSTM neural networks (Naderi and Hirst, 2017), recursive neural networks (Ji and Smith, 2017), and transformer-based language models such as BERT and RoBERTa (Khanehzar et al., 2019; Cabot et al., 2020; Mendelsohn et al., 2021). Some studies used MFC's annotated news reports partially and some used the full corpus. + +After manually annotating the GVFC dataset, Liu et al. (2019) used it to build a BERT-based classifier. The dataset was later applied in other studies (e.g., Akyurek et al., 2020; Tourni et al., 2021; Bhatia et al., 2021). + +Conceptualization. As mentioned above, Liu et al. (2019) used traditional framing definitions (e.g., Entman, 1993) while conceptualizing a frame. The studies applying MFC in building neural network-based classifiers also conceptualize framing by drawing on prior studies in both social and computational science. + +Review. In terms of approach, both groups of studies applied state-of-the-art pre-trained models based on transfer learning, which looks promising for advancing computational framing analysis. However, the quality of the annotated training datasets appears inadequate, which is reflected in the lack of results interpretation in those studies. As reviewed above, the MFC dataset seems more about categorizing a text into broad topics (e.g., "economic frames"), not frames. The subsequent studies applying the MFC dataset also did not adequately justify MFC's 15 dimensions as frames. Their results mainly focused on the accuracy of models built on the MFC training dataset, but not on whether the results provide framing nuances.
+ +Compared to MFC, GVFC's annotations look more coherent but still fall short of capturing framing nuances, as mentioned above in sub-section 4.1.3. For example, based on GVFC's "politics" code, Liu et al. (2019) interpreted the result saying, "it appears that news media of all types have largely politicized the gun violence issue right after each major mass shooting" (p. 511). Here, the politicization result and its interpretation do not align with how the code is defined. The results might indicate that the texts "discussed" "a politician" or politics, which is a simple topic or issue, not a major framing element like a problem definition and its coherent argument. + +# 4.2.6 Parsing Semantic Relations + +Another line of computational framing analysis relates to the exploration of semantic relations, going beyond the bag-of-words model. + +Method. Sturdza et al. (2018) operationalized Entman (1993)'s four framing elements as semantic relations in texts. This approach proposed a rule-based system that uses existing computational software, such as TurboParser, together with implicature rules. Using the parser, the authors proposed identifying syntactic structures in texts and then applying a set of rules to transform the syntactic structure into semantic networks. The networks determine the semantic roles of each word (e.g., actors, events) through a set of sentiment analysis implicature rules using a sentiment lexicon. + +On the other hand, Ziems and Yang (2021) computationally parsed various attributes (e.g., race) of police shooting victims in news reports and explored how differently they are portrayed in news media. They called it "entity-centric framing." A recent study by Yu (2022) looked at iterative adverbs (e.g., again) in political discourse, considering that such adverbs evoke different attitudinal subtexts.
After extracting sentences with relevant adverbs, the author grouped the sentences through $k$-means clustering and identified the most representative keywords in each cluster with a keyword mining tool. + +Conceptualization. In conceptualizing a frame, Sturdza et al. (2018) relied on the four framing elements of Entman (1993, p. 52). However, the two other studies lack an adequate conceptualization of framing. For instance, Ziems and Yang (2021) mainly explored "entity-centric" frames but did not elaborate on the notion with reference to existing literature. + +Review. Compared to the topic modeling method, this approach looks innovative in terms of understanding semantic relations between words and phrases. However, the idea does not seem adequately exploited for understanding the nuances of frames. For example, Sturdza et al. (2018) did not apply the operationalization to a practical dataset. Ziems and Yang (2021) reported frequencies and correlations, while Yu (2022)'s results ended with clusters and keywords, instead of exploring the coherent argument and relations among various framing devices. Nevertheless, by its design, the semantic relations approach holds potential for advancing the computational methods of framing analysis. + +# 4.2.7 Frequency-based Model + +Method. This model proposed using QDA Miner and its affiliated WordStat program to extract words and phrases and examine their repetition across the corpus (Kang and Yang, 2022). Within this model, Sanderink (2020) proposed slight changes: researchers first determine certain frames (e.g., energy security) by reviewing prior scholarship. They then prepare a codebook using QDA Miner. The codebook comprises words, phrases, and rules that capture various elements relating to each of the pre-determined frames. Finally, WordStat is used to calculate the frequency of words and phrases relating to each frame. + +Conceptualization.
Scholars in this approach defined a frame in terms of word recurrence in a document. They also highlighted the ways of editing, interpreting, organizing, and presenting information for particular news content to be framed. They compared a frame with a theme. + +Review. The frame was not appropriately conceptualized here, as per existing framing definitions (e.g., Entman, 1993). Considering only the frequency of words cannot capture the coherent meanings of frames. + +# 4.2.8 FrameAxis + +Method. The FrameAxis model explores "microframes," which are operationalized as pairs of antonyms, such as legal versus illegal and fast versus slow. + +The antonyms are obtained from WordNet. Then, the authors compute the bias of each microframe (the average contribution of all words in a document to the microframe) and the intensity of each microframe (how strongly it is presented in a document). The microframes are analyzed along with the agent-object-action patterns identified by a semantic role labeling (SRL) model in the corpus. + +Conceptualization. A frame in this approach was conceptualized utilizing features of existing definitions. For example, the authors highlighted presenting some selected aspects of an issue and making them more salient, which aims to promote certain values, interpretations, or solutions. + +Review. Though the framing conceptualization is derived from prominent framing definitions, the core aspect of FrameAxis is the pair of antonyms, which again leaves out the coherent argument, problem definition, and other framing elements. + +# 4.2.9 Analysis of Topic Model Networks + +Walter and Ophir (2019) proposed this mixed-method approach, "Analysis of Topic Model Networks" (ANTMN), which combines topic modeling and semantic network analysis. It was applied in other studies (e.g., Ophir et al., 2021). + +Method. ANTMN includes three steps. First, the authors apply LDA topic modeling (Blei, 2012) to the dataset.
They label each topic by qualitatively examining three types of information: a) words with the highest loading on each topic, b) prevalent and exclusive words in each topic, and c) the full documents that are most representative of each topic. Second, ANTMN creates a semantic network, where the topics serve as nodes and the topics' similarity relationships serve as edges. The relationship is calculated based on the topics' co-occurrence in the documents. The output is a fully connected, undirected, and weighted network. Finally, a community detection algorithm is used to cluster the topics into various communities in the network based on the topics' prevalence in similar documents (Walter and Ophir, 2019). + +Conceptualization. As the authors noted, ANTMN can analyze emphasis frames (e.g., highlighting one side), not equivalency frames (e.g., gain vs. loss framing). They conceptualize a frame as "a communit[y] in a network of topics" (p. 248), based on linguistic patterns. Borrowing van Atteveldt and Peng (2018)'s idea of arranging various framing devices around an overarching idea (e.g., a cluster of relevant framing devices), they consider each topic in topic modeling a framing device. A cluster of topics is called a frame in ANTMN. They embraced the patterns of a frame that "repeatedly invokes the same objects and traits, using identical or synonymous words and symbols in a series of similar communications that are concentrated in time" (Entman et al., 2009, p. 177). + +Review. A few things seem to have restricted ANTMN as a framing analysis model. As per the framing conceptualization, the topics (i.e., framing devices) under each network community need to be coherently connected with each other to render a coherent framing argument. The authors did not explain how the devices are coherently interconnected. This lack is reflected in the interpretation of the results.
For instance, they reported a framing result, saying that "the largest community on the right consisted of topics about the cultural and economic consequences.... Articles dominated by these topics portrayed the impact of diseases on the economy at large...." (Walter and Ophir, 2019, p. 259). Here, the authors mentioned the topics' names and what these topics portray, using words like "consists of" and "portrayed." The results did not provide a coherent argument about the problem or how one aspect is interconnected with another. Though the output demonstrated some topics, the authors' claim that the communities are frames is not supported by adequate evidence. + +Despite the authors' claim that this method is unsupervised, manual human labor is still needed in at least two places: a) the examination of words and documents to label topics and b) the interpretation of findings. However, no systematic method was provided for executing this manual analysis. + +# 5 Discussion and Conclusion + +In this article, we surveyed 37 empirical studies and reported on nine computational approaches, three coding schemas, and annotated corpora, examining how they conceptualize frames and utilize various computational methods to explore frames in large-scale datasets. Overall, existing methods and relevant resources are put together in this article. In the absence of a comprehensive understanding of, and resources for, computational framing analysis methods, this article's insights will benefit framing scholars, especially newcomers, who can gain deeper knowledge from this single article and build on it in further exploring frames in big data. + +Algorithmic Functions. As demonstrated above, most algorithms used in computational framing analysis were not originally built for this purpose. For example, LDA topic modeling is basically built to find broader themes in a large corpus (Blei, 2012). The works of Liu et al.
(2019) and Walter and Ophir (2019), however, seem innovative in terms of their efforts to build a new or modified method to explore comparatively more framing nuances (Nicholls and Culpepper, 2021). As state-of-the-art models, neural networks appear promising, but appropriate training datasets need to be developed for them. + +Conceptualization of Frames. Though the computational methods mostly conceptualized a frame with prominent definitions (e.g., the definition of Entman, 1993), some of the methods embraced framing aspects only partially. Some studies ended up operationalizing a frame in a way that is not supported by the core framing aspects. For instance, Boydstun et al. (2013, 2014) included framing's main aspects in developing the PFC, yet defined the 15 dimensions as "topics" in the name of frames. Nguyen (2015) simply equated a frame with second-level agendas or sub-topics without adequate conceptual support. Though Liu et al. (2019) and Walter and Ophir (2019) provided relatively stronger conceptualizations, their results suggest that Liu et al. (2019)'s coding schema and Walter and Ophir (2019)'s network communities still fall short of providing coherent problem definitions and causal interpretation arguments. + +Interpretation of Results. Even when studies conceptualized frames in a relatively comprehensive way, their presentation and interpretation of results rarely went beyond describing relevant topics and themes rather than frames, as the results do not illustrate a coherent problem definition, causal evaluation, or potential recommendations. The example mentioned under ANTMN above demonstrates this. Similar gaps in framing conceptualization and in the presentation and interpretation of results remain in other approaches as well (e.g., topic modeling and cluster analysis). + +Use of Framing Devices. The bag-of-words approach automatically excludes from analysis many potential framing devices listed in Table 1.
The approaches examined in this article mostly utilize only one framing device (i.e., words). Considering that framing analysis is a comprehensive approach involving multiple theoretical and practical aspects (D'Angelo, 2018; Golan, 2021), even qualitative framing analysis through manual labor is challenging work. From that perspective, computational approaches are at a nascent stage in addressing this social science problem of framing analysis. So, the scholarship needs better computational methods and tools that can explore frames as closely as possible. For example, future computational approaches might retrieve the problem definition and causal interpretation by including more framing devices (see Table 1) and going beyond the analysis of "words." + +Overall, this survey article contributed to the literature on computational framing analysis in several ways. As the first survey paper, it put together existing computational framing analysis methods and resources in one place, which can benefit future scholars at least as a source of more comprehensive knowledge on computational framing analysis approaches. With this knowledge, they can start further exploring frames in big data and advancing computational framing analysis methods. This article also contributed to the ongoing discussion and scholarly efforts on further improving the computational tools for framing analysis. + +Open Questions.
The analysis and discussion offer at least three open questions to be addressed in future studies: a) How can a computational approach capture all relevant semantic relations, going beyond just words, for better exploration of frames? b) How can the semantic relations in one text document be connected with, or informed by, those of other documents for a broader understanding of frames across multiple documents? c) Given the role of many framing devices beyond words in constructing frames (see Table 1), how can we develop a computational model that captures salience deployed through other framing devices, including sentences, metaphors, size and placement of texts, culture, emotion, sources, catchphrases, exemplars, visual content, etc.? + +A crucial part of framing analysis is to capture "how" a text is presented. Entman (1993)'s definition talks about "perceived reality," which also aligns with people's cognitive thoughts. In texts, the "perceived reality" is usually dissected into what is discussed and how it is framed. Though the "what" part is generally apparent, the main issue is analyzing the "how." In NLP, it appears difficult to automatically distinguish between the "what" and the "how." So, the framing analysis task in NLP is more complicated than it is for human analysts. + +Limitations. Selecting articles for this survey was a challenging task, as the words "frame" and "framing" are used in studies of other disciplines (e.g., engineering). This prompted us to exploit multiple sources (e.g., Google Scholar and Scopus) to collect relevant articles as comprehensively as possible. Articles not matching the keyword searches might have been left out, so the list may be missing some articles due to these search constraints. We excluded non-English articles. + +Regarding analysis, we mainly focused on methodological design and quality in terms of capturing and examining frames and framing devices.
We did not focus on or report the accuracy of the models' performance. For example, we emphasized the quality of the training dataset (e.g., MFC) for exploring frames, instead of the models' accuracies. As this survey article is conducted from a qualitative perspective, our results are constrained in terms of quantitative insights (e.g., the frequency or percentage of particular methods applied in prior studies). + +# References + +Lene Aarøe. 2011. Investigating frame strength: The case of episodic and thematic frames. Political Communication, 28(2):207-226. +Afra Feyza Akyurek, Lei Guo, Randa Elanwar, Prakash Ishwar, Margrit Betke, and Derry Tanti Wijaya. 2020. Multi-label and multilingual news framing analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. +Christian Baden and Paul D'Angelo. 2018. Reconstructing frames from intertextual news discourse. Doing News Framing Analysis II: Empirical and Theoretical Perspectives, pages 43-66. +Monika Bednarek and Georgia Carr. 2021. Computer-assisted digital text analysis for journalism and communications research: Introducing corpus linguistic techniques that do not require programming. Media International Australia, 181(1):131-151. +Vibhu Bhatia, Vidya Prasad Akavoor, Sejin Paik, Lei Guo, Mona Jalal, Alyssa Smith, David Assefa Tofu, Edward Edberg Halim, Yimeng Sun, Margrit Betke, et al. 2021. OpenFraming: Open-sourced tool for computational framing analysis of multilingual data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 242-250. +David M Blei. 2012. Probabilistic topic models. Communications of the ACM, 55(4):77-84. + +Porismita Borah. 2008. Examining media content: A case study of newspaper coverage of dowry in India, 1999-2006. Asian Journal of Communication, 18(4):379-395. +Amber E Boydstun, Dallas Card, Justin Gross, Paul Resnick, and Noah A Smith. 2014.
Tracking the development of media frames within and across policy issues. Technical report, University of California, Davis. +Amber E Boydstun, Justin H Gross, Philip Resnik, and Noah A Smith. 2013. Identifying media frames and frame dynamics within and across policy issues. In New Directions in Analyzing Text as Data Workshop, London. +Bjorn Burscher, Rens Vliegenthart, and Claes H de Vreese. 2016. Frames beyond words: Applying cluster and sentiment analysis to news coverage of the nuclear power issue. Social Science Computer Review, 34(5):530-545. +Pere-Lluis Huguet Cabot, Verna Dankers, David Abadi, Agneta Fischer, and Ekaterina Shutova. 2020. The pragmatics behind politics: Modelling metaphor, framing and emotion in political discourse. ACL Anthology. +Dallas Card, Amber Boydstun, Justin H Gross, Philip Resnik, and Noah A Smith. 2015. The media frames corpus: Annotations of frames across issues. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 438-444. +Paul DiMaggio, Manish Nag, and David Blei. 2013. Exploiting affinities between topic modeling and the sociological perspective on culture: Application to newspaper coverage of US government arts funding. Poetics, 41(6):570-606. +Paul D'Angelo. 2018. Doing news framing analysis II: Empirical and theoretical perspectives. +Robert M Entman. 1993. Framing: Toward clarification of a fractured paradigm. McQuail's Reader in Mass Communication Theory, pages 390-397. +Robert M Entman. 2003. Cascading activation: Contesting the White House's frame after 9/11. Political Communication, 20(4):415-432. +Robert M Entman, Jörg Matthes, and Lynn Pellicano. 2009. Nature, sources, and effects of news framing. In The Handbook of Journalism Studies, pages 195-210. Routledge. +Gail Fairhurst and Robert Sarr. 1996. The art of framing. San Francisco: Jossey-Bass. +Gail T Fairhurst.
2005. Reframing the art of framing: Problems and prospects for leadership. Leadership, 1(2):165-185. + +Anjalie Field, Doron Kliger, Shuly Wintner, Jennifer Pan, Dan Jurafsky, and Yulia Tsvetkov. 2018. Framing and agenda-setting in Russian news: A computational analysis of intricate political strategies. arXiv preprint arXiv:1808.09386. +William A Gamson, David Croteau, William Hoynes, and Theodore Sasson. 1992. Media images and the social construction of reality. Annual Review of Sociology, 18(1):373-393. +William A Gamson and Andre Modigliani. 1989. Media discourse and public opinion on nuclear power: A constructionist approach. American Journal of Sociology, 95(1):1-37. +Salma Ghanem. 1997. Filling in the tapestry: The second level of agenda setting. In M. E. McCombs, D. L. Shaw, & D. H. Weaver (Eds.), Communication and Democracy (pp. 3-15). +Fabrizio Gilardi, Charles R Shipan, and Bruno Wuest. 2021. Policy diffusion: The issue-definition stage. American Journal of Political Science, 65(1):21-35. +Guy Golan. 2021. What is news framing? An informal conversation among framing scholars. https://www.youtube.com/watch?v=mArApGS-p1I&t=57s. +Lei Guo, Chao Su, Sejin Paik, Vibhu Bhatia, Vidya Prasad Akavoor, Ge Gao, Margrit Betke, and Derry Wijaya. 2022. Proposing an open-sourced tool for computational framing analysis of multilingual data. Digital Journalism, pages 1-22. +James K Hertog and Douglas M McLeod. 2001. A multiperspectival approach to framing analysis: A field guide. In Framing Public Life, pages 157-178. Routledge. +Shanto Iyengar. 1994. Is anyone responsible?: How television frames political issues. University of Chicago Press. +Yangfeng Ji and Noah Smith. 2017. Neural discourse structure for text categorization. arXiv preprint arXiv:1702.01829. +Elise Jing and Yong-Yeol Ahn. 2021. Characterizing partisan political narrative frameworks about COVID-19 on Twitter. EPJ Data Science, 10(1):53. +Kristen Johnson, Di Jin, and Dan Goldwasser. 2017.
Leveraging behavioral and social information for weakly supervised collective classification of political discourse on Twitter. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 741-752. +Yowei Kang and Kenneth CC Yang. 2022. Communicating racism and xenophobia in the era of Donald Trump: A computational framing analysis of the US-Mexico cross-border wall discourses: Special issue on the Donald Trump era and communicating race in America. Howard Journal of Communications, pages 1-20. + +Shima Khanehzar, Andrew Turpin, and Gosia Mikolajczak. 2019. Modeling political framing across policy issues and contexts. In Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association, pages 61-66. +Sotiris B Kotsiantis, Ioannis Zaharakis, P Pintelas, et al. 2007. Supervised machine learning: A review of classification techniques. Emerging Artificial Intelligence Applications in Computer Engineering, 160(1):3-24. +Haewoon Kwak, Jisun An, and Yong-Yeol Ahn. 2020. A systematic media frame analysis of 1.5 million New York Times articles from 2000 to 2017. In 12th ACM Conference on Web Science, pages 305-314. +Haewoon Kwak, Jisun An, Elise Jing, and Yong-Yeol Ahn. 2021. FrameAxis: Characterizing microframe bias and intensity with word embedding. PeerJ Computer Science, 7:e644. +Pengxiang Li, Hichang Cho, Yuren Qin, and Anfan Chen. 2021. #MeToo as a connective movement: Examining the frames adopted in the anti-sexual harassment movement in China. Social Science Computer Review, 39(5):1030-1049. +Siyi Liu, Lei Guo, Kate Mays, Margrit Betke, and Derry Tanti Wijaya. 2019. Detecting frames in news headlines and its application to analyzing news framing trends surrounding US gun violence. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). +Maxwell McCombs, Juan Pablo Llamas, Esteban Lopez-Escobar, and Federico Rey. 1997.
Candidate images in Spanish elections: Second-level agenda-setting effects. Journalism & Mass Communication Quarterly, 74(4):703-717. +Julia Mendelsohn, Ceren Budak, and David Jurgens. 2021. Modeling framing in immigration discourse on social media. arXiv preprint arXiv:2104.06443. +Nona Naderi and Graeme Hirst. 2017. Classifying frames at the sentence level in news articles. Policy, 9:4-233. +Thomas E Nelson, Rosalee A Clawson, and Zoe M Oxley. 1997. Media framing of a civil liberties conflict and its effect on tolerance. American Political Science Review, 91(3):567-583. +Viet-An Nguyen. 2015. Guided probabilistic topic models for agenda-setting and framing. Ph.D. thesis, University of Maryland, College Park. +Viet-An Nguyen, Jordan Boyd-Graber, Philip Resnik, and Kristina Miler. 2015. Tea party in the house: A hierarchical ideal point topic model and its application to Republican legislators in the 112th Congress. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1438-1448. + +Tom Nicholls and Pepper D Culpepper. 2021. Computational identification of media frames: Strengths, weaknesses, and opportunities. Political Communication, 38(1-2):159-181. +Yotam Ophir, Dror Walter, Daniel Arnon, Ayse Lokmanoglu, Michele Tizzoni, Joëlle Carota, Lorenzo D'Antiga, and Emanuele Nicastro. 2021. The framing of COVID-19 in Italian media and its relationship with community mobility: a mixed-method approach. Journal of Health Communication, 26(3):161-173. +Stephen D Reese, Oscar H Gandy Jr, and August E Grant. 2001. Framing public life: Perspectives on media and our understanding of the social world. Routledge. +Margaret E Roberts, Brandon M Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Kushner Gadarian, Bethany Albertson, and David G Rand. 2014. Structural topic models for open-ended survey responses.
American Journal of Political Science, 58(4):1064-1082. +Malik Sallam. 2021. COVID-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines, 9(2):160. +Lisa Sanderink. 2020. Shattered frames in global energy governance: Exploring fragmented interpretations among renewable energy institutions. Energy Research & Social Science, 61:101355. +Antonio Sanfilippo, Lyndsey Franklin, Stephen Tratz, Gary Danielson, Nicholas Mileson, Roderick Riensche, and Liam McGrath. 2008. Automating frame analysis. In Social computing, behavioral modeling, and prediction, pages 239-248. Springer. +Mihai D Sturdza et al. 2018. Automated framing analysis: A rule-based system for news media text. Journal of Media Research-Revista de Studii Media, 11(32):94-110. +Geoffrey Supran and Naomi Oreskes. 2021. Rhetoric and frame analysis of ExxonMobil's climate change communications. One Earth, 4(5):696-719. +J Swenson. 1990. News coverage of the abortion issue: Framing changes in the 1980s. Paper presented to the Committee on the Status of Women, Association for Education in Journalism and Mass Communication. +James W Tankard Jr. 2001. The empirical approach to the study of media framing. In Framing public life, pages 111-121. Routledge. +David Tewksbury and Julius Matthew Riles. 2018. Framing in an interactive news environment. Doing news framing analysis II: Empirical and theoretical perspectives, pages 137-162. +Isidora Tourni, Lei Guo, Taufiq Husada Daryanto, Fabian Zhafransyah, Edward Edberg Halim, Mona Jalal, Boqi Chen, Sha Lai, Hengchang Hu, Margrit Betke, et al. 2021. Detecting frames in news headlines and lead images in US gun violence coverage. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4037-4050. +Wouter van Atteveldt and Tai-Quan Peng. 2018. When communication meets computation: Opportunities, challenges, and pitfalls in computational communication science.
Communication Methods and Measures, 12(2-3):81-92. +TA Van Dijk. 2016. Analyzing frame analysis: A critical review of framing studies in social movement research. Technical report, Working paper version 4.0, 2 December. https://www.academia.edu/40286423.... +Dror Walter and Yotam Ophir. 2019. News frame analysis: An inductive mixed-method computational approach. Communication Methods and Measures, 13(4):248-266. +Dror Walter and Yotam Ophir. 2021. Strategy framing in news coverage and electoral success: An analysis of topic model networks approach. Political Communication, 38(6):707-730. +Kenneth CC Yang and Yowei Kang. 2020. Framing national security concerns in mobile telecommunication infrastructure debates: A text mining study of Huawei. In Huawei goes global, pages 319-339. Springer. +Tuukka Ylä-Anttila, Veikko Eranti, and Anna Kukkonen. 2021. Topic modeling for frame analysis: A study of media debates on climate change in India and USA. Global Media and Communication, page 17427665211023984. +Qi Yu. 2022. "Again, dozens of refugees drowned": A computational study of political framing evoked by presuppositions. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop, pages 31-43. +Caleb Ziems and Diyi Yang. 2021. To protect and to serve? Analyzing entity-centric framing of police violence. arXiv preprint arXiv:2109.05325. + +# A Appendix + +
| Devices | Sources | Devices | Sources |
| --- | --- | --- | --- |
| **Content (Texts)** | | **Content (Visual)** | |
| 1. Words | Entman (1993) | 25. Metaphors | Fairhurst and Sarr (1996); Gamson and Modigliani (1989); Tankard Jr (2001) |
| 2. Stock phrases | Entman (1993) | 26. Visual images (e.g., picture, icon) | Tankard Jr (2001); Gamson and Modigliani (1989) |
| 3. Stereotyped image | Entman (1993) | 27. Chart & graph | Tankard Jr (2001) |
| 4. Sources of info | Entman (1993) | | |
| 5. Sentences | Entman (1993) | **Action** | |
| 6. Metaphors | Fairhurst and Sarr (1996); Gamson and Modigliani (1989) | 28. Placement (e.g., front page) | Entman (1993); Swenson (1990) |
| 7. Jargon/catchphrase | Fairhurst and Sarr (1996); Gamson and Modigliani (1989) | 29. Repetition | Entman (1993) |
| 8. Contrast | Fairhurst and Sarr (1996) | 30. Associating with culturally familiar symbols | Entman (1993) |
| 9. Spin | Fairhurst and Sarr (1996) | 31. Include | Entman (1993) |
| 10. Stories | Fairhurst and Sarr (1996) | 32. Omit or hide | Entman (1993) |
| 11. Headlines & subheadlines | Tankard Jr (2001) | 33. Show root causes | Gamson and Modigliani (1989) |
| 12. Subheads | Tankard Jr (2001) | 34. Show effects | Gamson and Modigliani (1989) |
| 13. Photo captions | Tankard Jr (2001) | 35. Make appeals to principles (moral claims) | Gamson and Modigliani (1989) |
| 14. Leads | Tankard Jr (2001) | | |
| 15. Selection of sources | Tankard Jr (2001) | **Context** | |
| 16. Selection of quote | Tankard Jr (2001) | 36. Contextual information | Baden and D’Angelo (2018) |
| 17. Blown up quotes | Tankard Jr (2001) | 37. Culture | Entman (1993) |
| 18. Series’ logos | Tankard Jr (2001) | | |
| 19. Statistics | Tankard Jr (2001) | **Communicator** | |
| 20. Concluding statements | Tankard Jr (2001) | 38. Thought | Fairhurst and Sarr (1996) |
| 21. Exemplars | Gamson and Modigliani (1989) | 39. Forethought | Fairhurst and Sarr (1996) |
| 22. Depictions | Gamson and Modigliani (1989) | 40. Being bias | Fairhurst and Sarr (1996) |
| 23. Emotion | Aarøe (2011) | | |
| 24. Hashtag | Borah (2008) | | |
+ +Table 1: Framing Devices Used to Construct Frame(s) + +
| Citation | Type | Domain | Method / Annotated corpora used | Resource |
| --- | --- | --- | --- | --- |
| 1. Boydstun et al. (2013) | Corpus, method | Tobacco, immigrant, same-sex marriage | Regression, Policy frames codebook (PFC) | N/A |
| 2. DiMaggio et al. (2013) | Application | Artists & arts | Topic modeling | N/A |
| 3. Boydstun et al. (2014) | [1] | [1] | [1] | N/A |
| 4. Card et al. (2015) | Method | [1] | Media frames corpus (MFC) | GitHub |
| 5. Nguyen (2015) | Method | Congressional debates, reviews | Hierarchical topic modeling | GitHub |
| 6. Nguyen et al. (2015) | Method | Congress speech | [5] | [5] |
| 7. Burscher et al. (2016) | Application | Nuclear power | Cluster analysis | N/A |
| 8. Ji and Smith (2017) | Application | Immigration | Neural network, semantic relations | GitHub |
| 9. Johnson et al. (2017) | Application | Abortion, affordable care act | [8] | GitHub |
| 10. Naderi and Hirst (2017) | Application | Immigration, smoking | [8] | N/A |
| 11. Field et al. (2018) | Application | U.S. coverage in Russian newspaper | [4] | N/A |
| 12. Sturdza et al. (2018) | Method | N/A | Operationalization of semantic relations | N/A |
| 13. Khanehzar et al. (2019) | Application | Immigration, same-sex marriage | [8] | N/A |
| 14. Liu et al. (2019) | Method, annotated corpus | Gun violence | Gun violence frame corpus (GVFC), Neural network | GitHub |
| 15. Walter and Ophir (2019) | Method | Senate coverage, epidemics | Topic modeling, Network analysis | GitHub |
| 16. Akyurek et al. (2020) | Application & extension | [14] | [14] | GitHub, GitHub2 |
| 17. Cabot et al. (2020) | Application | Immigration, smoking | [8] | GitHub |
| 18. Kwak et al. (2020) | Application | Fake news | [4] | GitHub |
| 19. Sanderink (2020) | Application | Renewable energy | Frequency and co-occurrence model | Programs |
| 20. Yang and Kang (2020) | Application | Telecom | [19] | N/A |
| 21. Bednarek and Carr (2021) | Method, application & extension of open-source tool | Lifestyle | [19] | WordSmith |
| 22. Bhatia et al. (2021) | | Gun violence | [14] | [14] |
| 23. Gilardi et al. (2021) | Application | Govt policy | Structured topic modeling | Appendix |
| 24. Jing and Ahn (2021) | Application | Political polarization | FrameAxis | N/A |
| 25. Kwak et al. (2021) | Method | Reviews | [24] | N/A |
| 26. Li et al. (2021) | Application | #MeToo movement | [2] | N/A |
| 27. Mendelsohn et al. (2021) | Application | Immigration | [8] | GitHub |
| 28. Nicholls and Culpepper (2021) | Comparative | Banking | N/A | N/A |
| 29. Ophir et al. (2021) | Application | COVID-19 | [15] | N/A |
| 30. Supran and Oreskes (2021) | Application | Gun violence, oil and gas | [2] | N/A |
| 31. Tourni et al. (2021) | Application & extension | Gun violence | [14] | [14] |
| 32. Walter and Ophir (2021) | Application | [15] | [15] | [15] |
| 33. Ylä-Anttila et al. (2021) | Application | Climate change | [2] | N/A |
| 34. Ziems and Yang (2021) | Method | Police violence | Semantic relations | GitHub |
| 35. Guo et al. (2022) | [22] | Gun violence | [14] | [14] |
| 36. Yu (2022) | Method | Refugee crisis | [34] | GitHub |
| 37. Kang and Yang (2022) | Application | Racism, xenophobia | [19] | [19] |
+ +Table 2: Summary of the Methods and Resources \ No newline at end of file diff --git a/asurveyofcomputationalframinganalysisapproaches/images.zip b/asurveyofcomputationalframinganalysisapproaches/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..eb6bc31e78b48700dd04e7f68fe533ceaaa4ebc9 --- /dev/null +++ b/asurveyofcomputationalframinganalysisapproaches/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:528c774aff4dbdde253efad54d2b54914d8c42da31ff3eaa5e7f78df175f9840 +size 532162 diff --git a/asurveyofcomputationalframinganalysisapproaches/layout.json b/asurveyofcomputationalframinganalysisapproaches/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4388332f1bd878cebeb36f67753d93f382cecb57 --- /dev/null +++ b/asurveyofcomputationalframinganalysisapproaches/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69ec22589d27294ae1f9bd99c79f9283b2a0958daf2f57fb4db7661c65a7df8a +size 328813 diff --git a/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/2f0a08d4-5dc6-4865-b524-5cacb7cdbb0d_content_list.json b/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/2f0a08d4-5dc6-4865-b524-5cacb7cdbb0d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..4206c54905727e253dfcb6b44c8200565fc0f8a5 --- /dev/null +++ b/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/2f0a08d4-5dc6-4865-b524-5cacb7cdbb0d_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9969192c72fd39977df8167adf99630e64acc0b3dc3ee9961663eccb9c2549d5 +size 112144 diff --git a/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/2f0a08d4-5dc6-4865-b524-5cacb7cdbb0d_model.json b/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/2f0a08d4-5dc6-4865-b524-5cacb7cdbb0d_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..6d7dacb367ac281a55df6171e20fa343fb1fc8ae --- /dev/null +++ b/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/2f0a08d4-5dc6-4865-b524-5cacb7cdbb0d_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d744630369966f35880a54151744ca9c87e5978b2fc5ba4ede17bb2709a58423 +size 134515 diff --git a/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/2f0a08d4-5dc6-4865-b524-5cacb7cdbb0d_origin.pdf b/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/2f0a08d4-5dc6-4865-b524-5cacb7cdbb0d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..adaa0b73180b246f2b6c889c95ef4bee14e8a8cb --- /dev/null +++ b/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/2f0a08d4-5dc6-4865-b524-5cacb7cdbb0d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:04ec9d2506d48e3fe8267a08d19d9f6901fa4b37d6e1d51e18b934474afb4675 +size 2112264 diff --git a/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/full.md b/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/full.md new file mode 100644 index 0000000000000000000000000000000000000000..237427655bd1cc82a6173da7e26b79f2b845561c --- /dev/null +++ b/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/full.md @@ -0,0 +1,440 @@ +# A Systematic Investigation of Commonsense Knowledge in Large Language Models + +Xiang Lorraine Li†* Adhiguna Kuncoro‡ Jordan Hoffmann* Cyprien de Masson d'Autume♦ Phil Blunsom▲♣ Aida Nematzadeh‡ + +† Allen Institute for Artificial Intelligence ‡ DeepMind * Inflection AI ♦ Reka ▲ Cohere ♣ University of Oxford + +lorraine1@allenai.org nematzadeh@google.com + +# Abstract + +Language models (LMs) trained on large amounts of data (e.g., Brown et al., 2020; Patwary et al., 2021) have shown impressive performance on many NLP tasks under the
zero-shot and few-shot setup. Here we aim to better understand the extent to which such models learn commonsense knowledge — a critical component of many NLP applications. We conduct a systematic and rigorous zero-shot and few-shot commonsense evaluation of large pretrained LMs, where we: (i) carefully control for the LMs' ability to exploit potential surface cues and annotation artefacts, and (ii) account for variations in performance that arise from factors that are not related to commonsense knowledge. Our findings highlight the limitations of pre-trained LMs in acquiring commonsense knowledge without task-specific supervision; furthermore, using larger models or few-shot evaluation is insufficient to achieve human-level commonsense performance. + +# 1 Introduction + +Common sense — the implicit knowledge about everyday situations that is shared by humans — is an important prerequisite for developing general-purpose intelligent systems (McCarthy et al., 1960; Liu and Singh, 2004; Gunning, 2018). Intriguingly, recent large language models (LMs, Brown et al., 2020; Patwary et al., 2021; Rae et al., 2021) have achieved remarkable performance at various common sense benchmarks (e.g., Sakaguchi et al., 2020; Zellers et al., 2019a; Bisk et al., 2020b; Sap et al., 2019b), even when they are evaluated in a zero-shot or few-shot fashion, without explicit commonsense supervision. We revisit this apparent success, and conduct a rigorous study to better understand the extent to which such pre-trained LMs are able to capture commonsense knowledge. + +Question: Tracy took Jesse's students on a field trip and covered the expenses for everyone. How would you describe Tracy? + +Answer: A. giving B. selfish C. very generous + +Answer-only: very generous. + +Zero-shot: Tracy took Jesse's students on a field trip and covered the expenses for everyone. Tracy is very generous.
\n Tracy took Jesse's students on a field trip and covered the expenses for everyone. Tracy is very generous. + +Figure 1: The experiment settings with their corresponding input to the LM. The example is taken from Social IQa (Sap et al., 2019b) where we convert questions to natural text using the rules of Shwartz et al. (2020); this conversion yields better performance (§5). + +In this work, we focus on zero- and few-shot evaluations of pre-trained LMs without commonsense-specific fine-tuning for two reasons: First, we aim to examine if a pre-trained LM is able to acquire general commonsense knowledge. As pre-trained LMs constitute a foundational building block of NLP today, any deficiencies in their commonsense understanding can thus adversely manifest in downstream applications (Bommasani et al., 2021). Fine-tuning the LM would make it hard to disentangle how much of the commonsense knowledge is acquired by the underlying LM, as opposed to the task-specific supervision from a benchmark (Yogatama et al., 2019). Second, human-annotated commonsense datasets are expensive to collect due to the vast, diverse, and growing nature of commonsense knowledge (Elazar et al., 2021). + +Concretely, our work differs from prior work on commonsense evaluation of LMs (Brown et al., 2020; Patwary et al., 2021) by way of a more rigorous evaluation, in which we: (i) carefully control for the LM's ability to exploit potential surface cues and annotation artefacts to predict the answer, without reasoning over the context. We further (ii) account for the variations in factors influencing the LM's performance, which arise from certain evaluation design choices — independently of commonsense knowledge in the models. We systematically conduct this study on four commonsense benchmarks, six model sizes (up to a very large LM with 280B parameters), and multiple evaluation settings (e.g., different score functions and prompt format). + +We begin with our first question: When evaluating a large LM in a zero-shot setting, how does its zero-shot performance compare to a strong baseline (§3)? Controlling for the LM's ability to guess the correct answer, without even looking at the question (Poliak et al., 2018; Trichelair et al., 2019, Answer-only baseline, top of Fig. 1), we find that, despite the LM's strong zero-shot performance, the Answer-only baseline can nevertheless perform surprisingly well on some benchmarks. Despite the clear importance of comparing with answer-only baselines as shown in Figure 2, these comparisons are absent from recent work on large LMs (Zhou et al., 2020; Brown et al., 2020; Rae et al., 2021). Furthermore, increasing model size alone is unlikely to bridge the gap with human performance in the near future: Our analysis of scaling behavior suggests that much larger dense LMs (with 100T to $10^{18}$ parameters — which are infeasibly large at present) are needed to achieve human performance for 3 out of 4 benchmarks. + +Does familiarizing the LM with the task format using a few-shot evaluation setting substantially improve performance (§4)? We find that the few-shot evaluation (using up to 64 examples) does not substantially improve the LMs' performance for most tasks except Social IQa. Moreover, using the few-shot/in-context demonstration setting fails to bridge the gap between the LM and current SOTA. + +Finally, we ask: to what extent does the model's zero-shot performance vary depending on certain evaluation design choices, such as the format of the prompt or the score function (§5)? We find that these design choices — though they have little to do with common sense — can result in large fluctuations in performance (up to 19%). This finding challenges the notion that large LMs are largely able to work well out-of-the-box with minimal task-specific tuning.
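The scaling estimate quoted above can be illustrated with a simple least-squares fit of zero-shot accuracy against log model size. This is only a sketch under stated assumptions: it posits a log-linear trend, uses the Social IQa zero-shot accuracies reported in Table 3, and picks 70% as an arbitrary illustrative target rather than the human score:

```python
import math

# Social IQa zero-shot accuracies by parameter count (from Table 3).
params = [44e6, 117e6, 400e6, 1.3e9, 7e9, 280e9]
acc = [42.0, 43.7, 45.6, 46.9, 48.1, 50.2]

# Ordinary least squares: acc ~ a * log10(params) + b.
xs = [math.log10(p) for p in params]
n = len(xs)
mx, my = sum(xs) / n, sum(acc) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, acc)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def params_needed(target_acc):
    """Model size at which the fitted trend line reaches target_acc."""
    return 10 ** ((target_acc - b) / a)

print(f"slope: {a:.2f} accuracy points per 10x parameters")
print(f"trend reaches 70% accuracy at ~{params_needed(70):.1e} parameters")
```

Even under this optimistic extrapolation, the fitted trend of roughly two accuracy points per order of magnitude puts modest accuracy targets far beyond the largest dense models trained to date.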
Based on these findings, we emphasize the need to carefully select such design choices, explicitly state them to enable fair comparison with prior work, and quantify the robustness of the observed results across different design choices. + +All in all, our findings suggest that acquiring human-level commonsense knowledge, without relying on surface cues or task-specific supervision, + +
| Benchmark | Choices | Knowledge Types | Questions |
| --- | --- | --- | --- |
| HellaSwag (Zellers et al., 2019a) | 4 | Temporal, Physical | 10042 |
| WinoGrande (Sakaguchi et al., 2020) | 2 | Social, Physical | 1267 |
| Social IQa (Sap et al., 2019b) | 3 | Social | 1954 |
| PIQA (Bisk et al., 2020b) | 2 | Physical | 1838 |
+ +Table 1: Benchmark Statistics. Choices: the number of candidate answers for each question; Questions: the number of questions in the validation split. + +remains beyond the reach of current large LMs. Given the marginal improvements from increasing model size, we conjecture that other techniques, such as explicit commonsense supervision, multimodal grounding, or physical embodiment (Bisk et al., 2020a), are promising ways forward. + +# 2 Experimental Setting + +We begin by outlining our experimental setup, and describe the benchmarks, model, baselines, and other relevant experimental settings. + +# 2.1 Commonsense Benchmarks + +Commonsense knowledge spans many categories, such as physical common sense (e.g., a car is heavier than an apple), social common sense (e.g., a person will feel happy after receiving gifts), and temporal common sense (e.g., cooking an egg takes less time than baking a cake). Given this diverse nature of commonsense knowledge, various benchmarks have been proposed to test these different types of knowledge (e.g., Zellers et al., 2019a; Sakaguchi et al., 2020; Sap et al., 2019b; Bisk et al., 2020b; Lin et al., 2020; Boratko et al., 2020). + +Commonsense benchmarks broadly consist of two tasks: (a) multiple-choice evaluation (Zellers et al., 2018, 2019a; Sap et al., 2019b; Bisk et al., 2020b), where a model needs to choose the correct answer from a list of plausible answers; (b) generative evaluation (Boratko et al., 2020; Lin et al., 2020, 2021), which requires a model to generate an answer given a question and some additional context.
Here we focus on multiple-choice benchmarks, since they provide a more reliable automatic metric (i.e., accuracy), whereas automated metrics used to evaluate language generation (e.g., BLEU, Papineni et al., 2002) do not correlate perfectly with human judgment (Liu et al., 2016; Novikova et al., 2017).1 We use a diverse set of four representative multiple-choice commonsense benchmarks to better understand the extent to which pre-trained LMs are able to acquire different types of commonsense knowledge. We use the validation split of each benchmark, as their test splits are not public. + +HellaSwag (Zellers et al., 2019a) is designed to evaluate a model's ability to understand physical, grounded, and temporal common sense. Given a four-sentence story, the model must choose the correct ending from four candidates. The stories are either video captions from ActivityNet (Heilbron et al., 2015), or WikiHow passages (Koupaee and Wang, 2018). When evaluating LMs on a similar dataset (Zellers et al., 2018), incorrect answers can be easy to distinguish from correct ones; hence in constructing HellaSwag, Zellers et al. (2019a) removed easy negatives through adversarial filtering. + +WinoGrande (Sakaguchi et al., 2020) is a coreference resolution benchmark that mainly examines physical and social common sense. Each example consists of a sentence (e.g., "The trophy did not fit the suitcase because it is too big.") and two candidate entities (e.g., "trophy" or "suitcase"). The task is to choose the correct entity for the pronoun, e.g., "it" refers to "trophy" in the example. + +Social IQa (Sap et al., 2019b) focuses on evaluating social commonsense, in particular theory of mind — the capacity to reason about others' mental states (Flavell, 2004). Given context sentences and a corresponding question, the task is to choose the correct response from three candidates.
Annotators use the ATOMIC knowledge base (Sap et al., 2019a) to create context sentences and questions; the answers are provided by additional annotators. + +PIQA (Bisk et al., 2020b), short for physical interaction question answering, mainly covers the physical aspect of common sense. Each data point consists of a task and two alternative solutions to finish the task, one of which is correct. The tasks are curated from a website2 with instructions for everyday tasks (e.g., separating egg yolks from eggs); the solutions are provided by human annotators. + +# 2.2 Pre-trained Language Model + +We use the pre-trained language model of Rae et al. (2021), Gopher, which is an autoregressive Transformer (Vaswani et al., 2017) language model with 280 billion parameters. We choose Gopher because of its excellent zero-shot and few-shot performance at various benchmarks, in addition to its large model size, which has been shown to improve language modeling and downstream performance (Kaplan et al., 2020). Notably, Gopher is more than $50\%$ larger than GPT3 and as of March 2022, is one of the largest dense LMs developed to date. + +Gopher hyper-parameters. The pre-trained Gopher language model has 80 layers, 128 attention heads, 128-dimensional key/value vectors, and a feedforward layer dimension of 16,384. To better understand the effect of different model sizes (§3.2), we experiment with five other model sizes: 44M, 117M, 417M, 1.4B, and 7.1B. Similar to Gopher, each of these models was pre-trained by Rae et al. (2021); a full list of model hyper-parameters is summarized in Table 1 of Rae et al. (2021). Each model is trained by subsampling from the MassiveText dataset, which consists of more than 2 trillion tokens from various domains including web pages, news, books, and code (Rae et al., 2021). The authors have removed documents that overlap significantly with the evaluation sets from the training set, including the benchmarks used in our work.
We use TPUv3 to conduct all evaluations, with an estimated total compute budget of $2 \times 10^{20}$ FLOPs. + +Score function. On the multiple-choice benchmarks, we evaluate the pre-trained LM by calculating the score for each answer choice under the model, and select the highest-scoring answer $\hat{\mathbf{y}}$ : + +$$ +\hat{\mathbf{y}} = \operatorname *{arg max}_{\mathbf{y}\in Y(\mathbf{x})}s_{\boldsymbol{\theta}}(\mathbf{y}|\mathbf{x}); +$$ + +here $\mathbf{x}$ denotes the question or prompt, $Y(\mathbf{x})$ the set of answer choices for a given question, and $s_{\theta}(\cdot)$ the score of an answer choice $\mathbf{y}$ given $\mathbf{x}$ , under the pre-trained LM with parameters $\theta$ . We provide some examples in Table 2. For Social IQa, we convert questions to natural text using the rules of Shwartz et al. (2020); we find this natural text format to yield better results, as discussed in §5. + +Unless otherwise stated, we use cross-entropy (or token-level log prob) to score each answer: + +$$ +s _ {\boldsymbol {\theta}} (\mathbf {y} | \mathbf {x}) = \frac {\sum_ {i = 0} ^ {\| \mathbf {y} \|} \log \left(p _ {\boldsymbol {\theta}} \left(y _ {i} \mid x , y _ {0} \dots y _ {i - 1}\right)\right)}{\| \mathbf {y} \|}. \tag {1} +$$ + +This score function reduces the impact of length; without dividing by $\| \mathbf{y} \|$ , longer answers might have lower probabilities (Stahlberg and Byrne, 2019). GPT3 (Brown et al., 2020) also employs this score function for zero-shot evaluation. + +
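As a concrete sketch of Eq. 1 and the answer-selection rule (together with the Answer-only comparison from Fig. 1), the per-token log-probabilities below are made-up toy numbers standing in for a real LM's output, not Gopher outputs:

```python
def length_normalized_score(token_logprobs):
    """Eq. 1: average per-token log-probability, so longer answers
    are not penalized simply for having more tokens."""
    return sum(token_logprobs) / len(token_logprobs)

def pick_answer(candidates):
    """Return the argmax answer, where `candidates` maps each answer
    string to its per-token log-probs under the LM."""
    return max(candidates, key=lambda y: length_normalized_score(candidates[y]))

# Toy per-token log-probs for the Fig. 1 example (hypothetical numbers).
conditioned = {   # s(y | x): answers scored after the question prompt
    "giving":        [-2.1, -1.9],
    "selfish":       [-5.0, -4.2],
    "very generous": [-1.2, -0.8, -1.0],
}
answer_only = {   # s(y): the same answers scored with no question at all
    "giving":        [-6.0, -5.5],
    "selfish":       [-5.8, -5.9],
    "very generous": [-5.9, -5.7, -6.1],
}

print(pick_answer(conditioned))   # conditioning on x favors "very generous"
print(pick_answer(answer_only))   # without x the ranking can change
```

With these toy numbers the conditioned scores select "very generous" while the answer-only scores do not, which is exactly the kind of discrepancy the Answer-only baseline is designed to measure.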
| Dataset | Prompt: $\mathbf{x}$ | Answer: $\mathbf{y}$ |
| --- | --- | --- |
| HellaSwag | A woman is outside with a bucket and a dog. The dog is running around trying to avoid a bath. She | gets the dog wet, then it runs away again. |
| WinoGrande | The GPS and map helped me navigate home. I got lost when the | GPS got turned off. |
| Social IQa | Jordan was in charge of taking the food on the camping trip and left all the food at home. Jordan felt | horrible that he let his friends down on the camping trip. |
| PIQA | Make Halloween lanterns. | Draw ghost faces on empty milk bottles, put a candle in each one. |
+ +Table 2: Examples of the prompt $\mathbf{x}$ and the correct answer $\mathbf{y}$ in different benchmarks. + +# 2.3 Baselines + +We compare the performance of Gopher with two baselines. The first, simple baseline is to randomly select an answer candidate, where the chance of selecting the correct one is $\frac{1}{\text{number of choices}}$. We henceforth refer to this as the Random Baseline. We experiment with two other baselines: either choosing the majority label from the training data, or choosing the longest answer. We omit these baselines as they perform similarly to the Random Baseline. + +More importantly, we consider an Answer-only Baseline, where we select the highest-scoring answer choice under the LM, without conditioning on the question. More formally, this baseline considers $s_{\theta}(\mathbf{y})$, as opposed to $s_{\theta}(\mathbf{y}|\mathbf{x})$ in Eq. 1. This baseline reveals the extent to which the pre-trained LM conducts the appropriate reasoning over the context to select the answer, as opposed to relying on potential surface cues or annotation artefacts that make the correct answer a priori more probable than the rest. We illustrate this baseline at the top of Fig. 1. For WinoGrande, we calculate the cross-entropy of the text starting at the pronoun replacement, as shown in Table 2. Ideally, each answer choice should be equally likely if we do not consider the question, and the Answer-only performance should be close to the Random baseline. Similar hypothesis-only baselines are well-studied for natural language inference datasets (Poliak et al., 2018); Trichelair et al. (2019) further explored such an Answer-only baseline, albeit only on the SWAG benchmark (Zellers et al., 2018). + +# 3 Zero-shot Performance + +In Fig. 2, we report the zero-shot performance of our pre-trained LM (with 280B parameters, §2.2) on the four commonsense benchmarks, alongside: (i) the Random and Answer-only baselines, and (ii) the current state-of-the-art (SOTA) result.
The SOTA results are achieved by the UNICORN (Lourie et al., 2021) model with 11B parameters, which is pre-trained on 6 existing commonsense datasets (Zellers et al., 2019a; Bisk et al., 2020b; Sap et al., 2019b; Sakaguchi et al., 2020; Bhagavatula et al., 2020; Huang et al., 2019). + +Zero-shot performance. At first glance, we observe strong zero-shot results, outperforming the Random Baseline in all benchmarks (compare "Rand" and "ZS" in Fig. 2). However, the gap between the stronger Answer-only baseline and the zero-shot result is smaller for all benchmarks (compare "Answer" and "ZS"): Whereas this gap is still sizable for HellaSwag and WinoGrande $(>20)$, it is much smaller for Social IQa and PIQA. Finally, in all cases, there is still a large gap between the SOTA and zero-shot performance $(>10)$; this gap is largest for WinoGrande and Social IQa, suggesting that social and physical commonsense is challenging for pre-trained LMs — even a large one with 280B parameters — without task-specific supervision.$^4$ + +# 3.1 Answer-only bias + +As shown in Fig. 3, the performance gap between the Random and Answer-only baselines is notably large for HellaSwag and PIQA, where the Answer-only baseline outperforms the Random baseline by more than $32\%$ and $23\%$, respectively. This large gap highlights an existing answer-only bias in these benchmarks: the correct answer can, in fact, be selected by the LM without conducting the appropriate commonsense reasoning over the provided context. On the other hand, the Answer-only baseline performs similarly to the random baseline on WinoGrande and Social IQa; hence, the zero-shot performance on these benchmarks is a more reliable estimate of the model's acquisition of commonsense knowledge. Given the existing (and sometimes inevitable) answer-only biases in some benchmarks, it is important to contextualize the zero-shot results by comparing with strong baselines, although such comparisons are missing from recent work (e.g., Zhou et al., 2020; Brown et al., 2020; Rae et al., 2021). + +![](images/97dd8146d2062d57c2e004f372a95a1d45279330228ff67f4b073279434c346c.jpg) +Figure 2: Random Baseline (Rand), Answer-only Baseline (Answer), zero-shot (ZS), and the current state-of-the-art (SOTA) for each benchmark, which is achieved by UNICORN (Lourie et al., 2021). + +![](images/282940d4c85401b01fd08268bc519bf0831abf2a16f9b37d45dfbb3d4e5bdd05.jpg) +Figure 3: The performance gap between Answer-only and Random baselines for each benchmark. + +# 3.2 Does Increasing Model Size Help? + +Gopher (the largest LM we have access to) achieves a decent zero-shot performance for most commonsense benchmarks, but maintains a notable gap with fine-tuned SOTA results. Can we eventually reach human-level performance on these commonsense benchmarks by increasing model size alone? + +Since we do not have access to larger language models than Gopher, we examine the extent to which zero-shot performance improves when using Gopher compared to a range of smaller models (i.e., scaling plots). Such scaling plots can help us predict the performance for larger models than Gopher. To that end, we use 6 pre-trained model sizes from 44M to 280B parameters (see §2.2). We present the findings in Table 3. On all four
| Benchmark | Size | Answer | ZS | FS(1) | FS(10) | FS(64) |
| --- | --- | --- | --- | --- | --- | --- |
| HellaSwag | 44M | 25.8 | 28.0 | 28.0 | 28.1 | 27.9 |
| | 117M | 29.2 | 33.5 | 33.3 | 34.0 | 33.5 |
| | 417M | 35.6 | 44.1 | 43.4 | 43.3 | 43.3 |
| | 1.4B | 43.2 | 56.7 | 56.4 | 56.2 | 56.5 |
| | 7.1B | 50.4 | 69.5 | 67.6 | 67.9 | 67.9 |
| | Gopher | 57.0 | 79.1 | 77.8 | 79.2 | 79.3 |
| WinoGrande | 44M | 48.5 | 51.3 | 51.1 | 50.8 | 50.6 |
| | 117M | 50.8 | 52.0 | 51.9 | 50.9 | 50.8 |
| | 400M | 49.9 | 52.2 | 51.8 | 50.8 | 52.5 |
| | 1.3B | 49.7 | 58.1 | 56.4 | 56.0 | 57.3 |
| | 7B | 52.4 | 64.6 | 62.1 | 63.1 | 62.0 |
| | Gopher | 50.8 | 71.1 | 69.2 | 71.4 | 74.6 |
| Social IQa | 44M | 35.5 | 42.0 | 41.2 | 40.9 | 40.9 |
| | 117M | 36.1 | 43.7 | 42.7 | 42.1 | 42.2 |
| | 400M | 36.0 | 45.6 | 44.5 | 45.2 | 45.3 |
| | 1.3B | 35.8 | 46.9 | 46.4 | 48.6 | 50.5 |
| | 7B | 36.9 | 48.1 | 48.1 | 52.9 | 54.2 |
| | Gopher | 36.3 | 50.2 | 50.2 | 55.3 | 57.5 |
| PIQA | 44M | 60.2 | 62.6 | 62.1 | 62.3 | 61.3 |
| | 117M | 62.1 | 65.5 | 64.6 | 65.1 | 65.3 |
| | 400M | 65.9 | 70.9 | 68.8 | 70.5 | 70.1 |
| | 1.3B | 68.4 | 74.4 | 73.3 | 74.4 | 74.6 |
| | 7B | 70.0 | 77.4 | 75.5 | 77.6 | 78.1 |
| | Gopher | 73.2 | 80.5 | 79.3 | 81.4 | 81.5 |
Table 3: Performance of all models across benchmarks under different experimental settings. Answer: Answer-only baseline; ZS: zero-shot performance; FS($n$): few-shot performance, where $n$ is the number of examples.

benchmarks, the LM's zero-shot performance (Table 3, ZS column) consistently gets better as we use increasingly larger models. This finding is also consistent with that of Brown et al. (2020), who showed that larger models perform better at HellaSwag, WinoGrande, and PIQA. But, crucially, we argue that this does not necessarily mean that larger models are better at commonsense reasoning: for HellaSwag and PIQA, the Answer-only baseline also substantially improves with model size (Table 3, Answer column). Hence, for these benchmarks, larger models are also better at exploiting potential surface cues and annotation artefacts to guess the correct answer, without reasoning over the context. To properly assess commonsense reasoning, we should focus on the performance difference between the zero-shot results and the Answer-only baseline.

![](images/ded482db24bf1f7c0d9b55086fdfc80c5c9ec0e81672dcd940d7af9623a4dc4d.jpg)
Figure 4: The difference between zero-shot performance and the Answer-only baseline for different model sizes.

We plot this performance difference with respect to different model sizes in Fig. 4. We observe that larger models fare better across benchmarks: when increasing model size, the zero-shot performance gains exceed the performance gains of the Answer-only baseline. Nevertheless, the magnitude of this improvement varies depending on the benchmark: we see a substantial improvement on WinoGrande, but smaller improvements on HellaSwag, Social IQa, and PIQA.

Scaling behavior.
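The scaling trend in Table 3 can be extrapolated with a simple least-squares sketch: regress zero-shot accuracy on the logarithm of the parameter count and solve for an assumed human-level target (95.0 here; the paper's own regression is described in Appendix B, so this is only an approximation of that analysis).

```python
import numpy as np

# Zero-shot HellaSwag accuracy (Table 3, ZS column) vs. parameter count.
params = np.array([44e6, 117e6, 417e6, 1.4e9, 7.1e9, 280e9])
zs_acc = np.array([28.0, 33.5, 44.1, 56.7, 69.5, 79.1])

# Fit accuracy as a linear function of log10(parameters) and solve for the
# model size at which the fitted line reaches an assumed human-level 95.0.
slope, intercept = np.polyfit(np.log10(params), zs_acc, deg=1)
needed = 10 ** ((95.0 - intercept) / slope)
print(f"extrapolated size: {needed:.2e} parameters")  # on the order of 1e12
```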
Based on these trends, what model size would be required to achieve human-level performance on these benchmarks? Through a linear regression analysis (see Appendix B for more details), given the current rate of improvement in performance when gradually increasing the model size from 44M up to 280B, we need a model of at least 1.4T parameters to achieve human performance on HellaSwag, and a model of $>100\mathrm{T}$ parameters ($\sim$ 400x larger than Gopher) for the other benchmarks. This result suggests that training ever-larger models may not help us reach human performance, at least in the near future. Indeed, given the enormous compute costs of training even larger LMs than the 280B-parameter Gopher model, we conjecture that there are more efficient ways of acquiring commonsense knowledge in an unsupervised fashion, for instance through multi-modal learning and grounding (Bisk et al., 2020a).

# 4 Few-shot Performance

Recent work has shown that large LMs can perform surprisingly well at various tasks in a few-shot fashion (Brown et al., 2020; Patwary et al., 2021). Under this setup, the model is provided with $n$ examples of the downstream task, which are then appended to the prefix. Concretely, for the four commonsense benchmarks, we append $n$ examples that include the question and the correct answer; these examples — which are randomly sampled from the training split of each benchmark — appear before the evaluated question, as shown in Fig. 1. This few-shot formulation is appealing as it relies only on a small number of task-specific examples to get the LM accustomed to the task, without any fine-tuning. To what extent can we improve the model performance on commonsense benchmarks by shifting from the zero-shot to the few-shot evaluation protocol?$^6$

In Fig. 5, we compare the performance of Gopher under different evaluation protocols: (i) zero-shot and (ii) few-shot $(n)$, where we use $n\in\{1,10,64\}$ examples.
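The few-shot protocol just described can be sketched as prompt assembly plus averaging over repeated runs; the prompt formatting below is illustrative rather than the exact template used in the paper.

```python
import random
import statistics

# Hypothetical few-shot prompt assembly: n training examples (question plus
# correct answer) are prepended to the evaluated question, as in Fig. 1.
def build_prompt(train_set, test_question, n, rng):
    shots = rng.sample(train_set, n)
    demo = "\n".join(f"{q} {a}" for q, a in shots)
    return f"{demo}\n{test_question}"

def few_shot_accuracy(evaluate, train_set, test_set, n, runs=5, seed=0):
    """Average accuracy over several runs with freshly sampled shots."""
    scores = []
    for run in range(runs):
        rng = random.Random(seed + run)
        correct = sum(
            evaluate(build_prompt(train_set, q, n, rng)) == gold
            for q, gold in test_set)
        scores.append(correct / len(test_set))
    return statistics.mean(scores), statistics.pstdev(scores)
```

Here `evaluate` stands in for whatever scoring procedure maps a prompt to a predicted answer; reporting both the mean and the spread across runs mirrors the protocol described above.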
We run the few-shot experiments between 5 and 10 times, sampling different examples each time, and report the average performance. The variance across runs is very small and is shown as the error bar in Fig. 5.$^7$ Interestingly, model performance with few-shot (1) is sometimes worse than the zero-shot model, but the few-shot (10) and (64) models outperform their zero-shot counterpart (albeit sometimes by small margins). On HellaSwag and PIQA, we do not observe substantial improvement from few-shot evaluation compared to the zero-shot baseline (less than $2\%$). While few-shot evaluation does not help much for most datasets, the only exception is Social IQa, where the few-shot (64) model outperforms the zero-shot model by a $>7\%$ margin. We attribute this to the less natural text of Social IQa;$^9$ hence adding task-specific examples provides information about what is expected of the task.

![](images/38ff52137851c7c9f8c12d8d4220b2013e65812f8140db8c8a117d8bebed278a.jpg)
Figure 5: Accuracy on the benchmarks for zero-shot (ZS) and few-shot (FS) settings (with 1, 10, and 64 examples). We additionally report the error bars, although they are not always visible due to the very small variance.

Overall, we observe that the usefulness of the few-shot setting is benchmark dependent. Moreover, using task-specific examples in a few-shot setting does not bridge the gap to SOTA or human performance for any of the benchmarks.

Knowledge base retrieval. We further examine whether adding pre-extracted commonsense knowledge base triplets to the context — as a different form of few-shot/in-context learning — helps improve model performance (see Appendix D for details). In contrast to the work of Shwartz and Choi (2020), we observe no improvements when appending the triplets; we attribute this discrepancy to the strong performance of our base models (see §5).
# 5 Robustness of Reported Results

Different evaluation design choices — such as the format of the prompt or the choice of score function — can impact the LM's zero-shot performance and, crucially, result in different conclusions about a model's commonsense understanding ability. Moreover, the lack of a standardized zero-shot LM evaluation protocol makes direct comparisons between papers difficult (Shwartz et al., 2020; Bosselut et al., 2021). To what extent can we attribute variance in the reported results to these evaluation design choices — even though they have little to do with commonsense knowledge?

Model. Quantifying the robustness of the reported results necessitates scoring a large number of examples under different evaluation design choices, which is infeasible with the largest (280B-parameter) model, given its slow inference speed. Hence, we conduct the following experiments using the 7B-parameter model, which is still $\sim 5$ times larger than GPT2 (Radford et al., 2019).

Score functions. Prior work employs different score functions to assess the plausibility of each answer choice given a question (Brown et al., 2020; Shwartz et al., 2020; Bosselut et al., 2021; Holtzman et al., 2021), which makes a direct comparison between different results challenging. Here we investigate the impact of different score functions on the reported performance. In addition to cross-entropy (defined in §2.2), we experiment with two other score functions. The first is sequence log probability, defined as the log probability of the answer choice $\mathbf{y}$ conditional on the question $\mathbf{x}$. Letting $y_{i}$ be the $i$-th token in the answer $\mathbf{y}$:

$$
s(\mathbf{y} \mid \mathbf{x}) = \sum_{i=0}^{|\mathbf{y}|} \log p\left(y_{i} \mid \mathbf{x}, y_{0} \dots y_{i-1}\right) \tag{2}
$$

Another widely used score function (Bosselut et al., 2021; Holtzman et al., 2021) is point-wise mutual information.
This score function takes into account both the probability of the answer choices alone and the probability of the answer choices conditional on the question. This metric assesses whether the question adds information, as commonsense reasoning should be established within the context of the question. Because this score function accounts for the prior probability of the answer options, it can yield lower accuracy than score functions like cross-entropy that do not account for such factors (Answer-only baseline, §2.3).

$$
s(\mathbf{y} \mid \mathbf{x}) = \mathrm{PMI}(\mathbf{y}, \mathbf{x}) = \log \frac{p(\mathbf{y} \mid \mathbf{x})}{p(\mathbf{y})} \tag{3}
$$

Prompt format. Another important factor is the format of the prompt; here we consider a few such choices. In addition to the concatenation of the question and the answer, we experiment with adding the special symbols "[Question]" and "[Answer]" to mark the question and the answer (Brown et al., 2020). Moreover, for Social IQa and PIQA, we experiment with a set of predefined rules (taken from Shwartz et al., 2020) to convert the questions into sentences, which are closer to the LM's pre-training data format. Finally, we find that having the correct lower/upper case and punctuation is important; thus we manually checked all benchmarks to correct for case and punctuation.$^{10}$

Scored text. The next option is whether to score the entire question-answer pair (Shwartz et al., 2020), or only the answer choice (conditional on the given question as prefix), as done by Brown et al. (2020), i.e., whether to calculate $s(\mathbf{x};\mathbf{y})$ or $s(\mathbf{y}|\mathbf{x})$, where $;$ denotes text concatenation.

# 5.1 Do These Design Choices Matter?

Table 4 shows the performance difference between using the worst versus the best design choices, which are independently optimized for each task.
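One way to organize such a sweep over design choices is to vary one category at a time rather than taking the full cross-product; the option names below are hypothetical stand-ins for the choices discussed in this section.

```python
# One-at-a-time sweep over evaluation design choices (hypothetical options),
# instead of the full cross-product of all categories.
def coordinate_sweep(categories, defaults, evaluate):
    """Vary each category's options while holding the other categories fixed."""
    results = {}
    for category, options in categories.items():
        for option in options:
            setting = dict(defaults, **{category: option})
            results[(category, option)] = evaluate(setting)
    return results

categories = {
    "score_fn": ["cross_entropy", "seq_log_prob", "pmi"],
    "prompt": ["plain", "qa_tags", "question_to_sentence"],
    "scored_text": ["answer_only", "question_and_answer"],
}
defaults = {"score_fn": "cross_entropy", "prompt": "plain",
            "scored_text": "answer_only"}

runs = coordinate_sweep(categories, defaults, lambda setting: 0.0)
print(len(runs))  # 3 + 3 + 2 = 8 evaluations instead of 18 for the full grid
```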
To sweep over the above design choices, instead of considering all combinations of parameters, we iterate over the options in one category (e.g., score function) while fixing the parameters in the other categories.$^{11}$

Overall, we observe a difference between the best and worst settings on all benchmarks; this gap is especially large for HellaSwag and PIQA. This result shows that large language models do not simply work out of the box for some commonsense benchmarks: for some tasks, these evaluation design choices can account for a large variation in model performance. We find that the score function plays the most important role — cross-entropy yields the highest accuracy across most benchmarks, but sequence log probability achieves slightly better performance on WinoGrande. However, when using these scores, we should account for the Answer-only baseline (§3). Moreover, converting questions to sentences makes the largest difference for Social IQa. We also find that scoring the answer conditional on the question — as opposed to scoring the concatenation of the question and answer — works best, except for WinoGrande, which has no questions.
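The score functions compared in this section can be written compactly over per-token log-probabilities; a minimal sketch follows (the length-normalized form of cross-entropy is an assumption, standing in for the definition in §2.2, which is not reproduced here).

```python
# cond_lp[i] = log p(y_i | x, y_<i); uncond_lp[i] = log p(y_i | y_<i).
def seq_log_prob(cond_lp):        # Eq. (2): higher is better
    return sum(cond_lp)

def cross_entropy(cond_lp):       # assumed length-normalized; lower is better
    return -sum(cond_lp) / len(cond_lp)

def pmi(cond_lp, uncond_lp):      # Eq. (3): log p(y|x) - log p(y)
    return sum(cond_lp) - sum(uncond_lp)

# PMI rewards an answer that the question makes more likely: candidate `a`
# gains probability from the context, candidate `b` does not.
a_cond, a_uncond = [-1.0, -1.5], [-3.0, -3.5]
b_cond, b_uncond = [-1.0, -1.5], [-1.0, -1.5]
assert pmi(a_cond, a_uncond) > pmi(b_cond, b_uncond)

# The two probability-based scores disagree on answer length: sequence
# log probability penalizes extra tokens, while per-token cross-entropy
# can favour a longer answer whose continuation tokens are easy to predict.
short, long_ = [-3.0], [-3.0, -0.5, -0.5, -0.5]
assert seq_log_prob(short) > seq_log_prob(long_)
assert cross_entropy(long_) < cross_entropy(short)
```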
| | Worst | Best | Difference |
| --- | --- | --- | --- |
| HellaSwag | 50.8 | 70.5 | 19.7 |
| PIQA | 62.5 | 78.7 | 16.2 |
| Social IQa | 43.9 | 48.5 | 4.6 |
| WinoGrande | 59.7 | 62.0 | 2.3 |
Table 4: The performance difference between the worst and best design choices for each benchmark.

Answer-length bias. Although cross-entropy generally achieves the best reported performance, this score function is sensitive to answer length. As shown in Appendix C, cross-entropy tends to assign higher scores to longer answers; to varying extents, this pattern holds for PIQA, Social IQa, and WinoGrande. We attribute this to the higher probability assigned to later tokens in the sequence: such tokens have the most context and can thus be predicted more easily than tokens at the beginning of the answer. As longer answers have more of these easier-to-predict tokens, their cross-entropy tends to be lower. This pattern is reversed for metrics such as sequence log probability, where shorter sequences often have higher scores (Koehn and Knowles, 2017; Stahlberg and Byrne, 2019). Note that this bias does not change the results reported in this work, since there is no correlation between answer length and correctness (Appendix C).

Takeaways. We conclude this section with three concrete recommendations for future work.

- Although cross-entropy often achieves the best performance, it does not take into account the probability of selecting the correct answer without reasoning over the context (§3). We recommend that future work either: (i) use cross-entropy and report the gap with the Answer-only baseline, or (ii) use the PMI score function, which already takes the probability of the answer into account.
- In the same way that we search for the best model hyper-parameters, future work should search over certain important evaluation design choices, such as the format of the prompt, and whether to convert the questions into declarative sentences.
- Lastly, we strongly encourage future work to report the variance of the observed results across different design choices.
This can provide an indication of the robustness of language models' performance on commonsense benchmarks.

# 6 Related Work

While recent work evaluates LMs against commonsense benchmarks in a zero- and few-shot fashion, it does not examine the extent to which model performance can be attributed to superficial cues or annotation artefacts in a given dataset (e.g., through strong baselines), nor does it quantify how robust the model performance is under different evaluation design choices. Trichelair et al. (2019) and Elazar et al. (2021) investigate the existence of dataset bias in commonsense co-reference resolution benchmarks (Levesque et al., 2012; Sakaguchi et al., 2020) and SWAG (Zellers et al., 2018); here we conduct a more comprehensive investigation of four diverse commonsense benchmarks.

Another line of work probes for commonsense knowledge in LMs through knowledge base completion (Petroni et al., 2019; Davison et al., 2019) or manually designed probing tasks (Weir et al., 2020; Shwartz and Choi, 2020). Zhou et al. (2020) evaluate pre-trained LMs against commonsense benchmarks and propose a new dataset requiring multi-hop reasoning. In contrast, we focus on zero- and few-shot evaluation of commonsense understanding using the existing benchmarks.

# 7 Conclusion

We conduct a systematic and rigorous study of large LM performance on a diverse set of commonsense benchmarks, in a zero-shot and few-shot fashion. While pre-trained LMs can seemingly achieve good zero-shot performance on these benchmarks, these results can be partially attributed to the LM's ability to exploit potential surface cues and annotation artefacts to guess the correct answer, without reasoning over the provided context. We further observed that substantially increasing model size yields rather small improvements on most commonsense benchmarks: based on the scaling plots, achieving human-level performance requires much larger model sizes than what is currently feasible.
In addition, model performance can be highly sensitive to certain evaluation design choices. Overall, our findings offer valuable insights and best practices for rigorously evaluating large LMs.

# Ethical Considerations

The primary aim of this paper is to conduct a systematic and rigorous commonsense evaluation of a large language model, which — in the case of this work — is achieved by using the pre-trained Gopher language model (Rae et al., 2021) with 280B parameters. Hence, the same risks stemming from large language model research are also broadly applicable to this work (Bender et al., 2021). We briefly discuss these ethical considerations below.

Training compute. In practice, pre-training large language models like Gopher requires an enormous amount of compute, which may contribute to increased carbon emissions (Strubell et al., 2019; Patterson et al., 2021). In this work, we do not pre-train the language model from scratch, although we acknowledge that conducting inference and evaluation with large language models like Gopher still has substantial computational costs. Given the need to construct even larger language models ($>100$ trillion parameters) to achieve human-level performance on most of these benchmarks in an unsupervised fashion (§3.2), we encourage future work to focus on potentially more efficient ways of acquiring commonsense knowledge directly from data, e.g., through multi-modal learning, grounding, and human interaction (Bisk et al., 2020a).

Fairness and bias. Given the enormous size of the pre-training data — about 2 trillion tokens in the case of Gopher pre-training — it is conceivable that the training dataset may inadvertently contain toxic and biased material. Such toxic material — which is not always easily identifiable in the large training dataset — can in turn encourage the model to produce biased, harmful, or toxic output, especially when prompted with toxic text (Gehman et al., 2020).
In fact, Rae et al. (2021) demonstrated that — up to a certain model size — larger language models may respond to toxic prompts with greater toxicity compared to smaller ones. Furthermore, the enormous size of the training data does not necessarily guarantee diversity: we expect the training data to contain a smaller proportion of vernacular or regional English as used by underrepresented communities (Blodgett et al., 2016; Bender et al., 2021). The language model may also acquire harmful biases and stereotypes, e.g., assign lower probabilities to women becoming doctors as opposed to men (Rudinger et al., 2018; Cao and Daume III, 2021).

Language model misuse. Our work highlights both the success and limitations of large language models on multiple commonsense benchmarks. Nevertheless, the success and expressive power of large language models come at the expense of potential misuse. Given their ability to generate realistic-looking — albeit not necessarily factual — content, large language models can be used for malicious purposes. For instance, large language models can be used to generate convincing fake news (Zellers et al., 2019b), and more powerful generators can in turn generate even more convincing and influential fake news. Given the difficulty of manually distinguishing between human-generated and machine-generated text (Clark et al., 2021), how we can better detect and defend against malicious use of large language models is an important and exciting avenue for future work.

# Limitations

There are limitations to this work: first, we only assessed models' performance on multiple-choice questions (and not in a generative setting).
Multiple-choice problems have a more reliable automatic metric; in contrast, metrics used for generative tasks do not always accurately reflect human judgment (Clark et al., 2021). Second, we only evaluate the benchmarks on one family of models, the Gopher models and their variants; given the computational cost, and also the lack of availability of different large language models (LLMs), we cannot run our experiments on model families other than Gopher. However, we include zero-shot results on commonsense benchmarks from existing work on other LLMs in the paper (such as the GPT2 result in Table 7). Moreover, LLMs behave very similarly on various benchmarks, and we expect our results to generalize to other LLMs as well. Last but not least, we only evaluate models that are trained solely on language. Recent multimodal models have shown impressive performance on a range of tasks (Saharia et al., 2022). Will models trained on multiple modalities have more commonsense? We aim to answer this question in future work.

# Acknowledgments

We would like to thank Ivana Kajic and Laura Rimell for their detailed comments on our paper. Thanks also to Stella Biderman and the anonymous reviewers for their helpful feedback. We also thank Jack W. Rae and the other authors of the Gopher paper for providing efficient evaluation pipelines for models from the Gopher family.

# References

Lisa Bauer and Mohit Bansal. 2021. Identify, align, and integrate: Matching knowledge graphs to commonsense reasoning tasks. EACL.
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proc. of FAccT.
Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In International Conference on Learning Representations.
+Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020a. Experience grounds language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). +Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. 2020b. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7432-7439. +Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In Proc. of EMNLP. +Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, S. Buch, D. Card, Rodrigo Castellon, Niladri S. Chatterji, Annie Chen, Kathleen Creel, Jared Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren E. Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas F. Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, O. Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir P. Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, J. F. 
Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Robert Reich, Hongyu Ren, Frieda Rong, Yusuf H. Roohani, Camilo Ruiz, Jackson K. Ryan, Christopher R'e + +Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishna Parasuram Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramér, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei A. Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. 2021. On the opportunities and risks of foundation models. ArXiv, abs/2108.07258. +Michael Boratko, Xiang Lorraine Li, Rajarshi Das, Tim O'Gorman, Dan Le, and Andrew McCallum. 2020. Protoqa: A question answering dataset for prototypical common-sense reasoning. EMNLP 2020. +Antoine Bosselut, Ronan Le Bras, and Yejin Choi. 2021. Dynamic neuro-symbolic knowledge graph construction for zero-shot commonsense question answering. In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI). +Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. Comet: Commonsense transformers for automatic knowledge graph construction. In ACL. +Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc. +Yang Trista Cao and Hal Daumé III. 2021. 
Toward gender-inclusive coreference resolution: An analysis of gender and bias throughout the machine learning lifecycle. Computational Linguistics.
Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. All that's 'human' is not gold: Evaluating human evaluation of generated text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7282-7296, Online. Association for Computational Linguistics.
Joe Davison, Joshua Feldman, and Alexander M Rush. 2019. Commonsense knowledge mining from pretrained models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1173-1178.
Yanai Elazar, Hongming Zhang, Yoav Goldberg, and Dan Roth. 2021. Back to square one: Bias detection, training and commonsense disentanglement in the Winograd schema. arXiv preprint arXiv:2104.08161.
John H Flavell. 2004. Theory-of-mind development: Retrospect and prospect. Merrill-Palmer Quarterly (1982-), pages 274-290.
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Findings of EMNLP.
Jonathan Gordon and Benjamin Van Durme. 2013. Reporting bias and knowledge acquisition. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, pages 25-30.
David Gunning. 2018. Machine common sense concept paper. arXiv preprint arXiv:1810.07528.
Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. 2015. ActivityNet: A large-scale video benchmark for human activity understanding. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 961-970.
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. 2018. Deep reinforcement learning that matters. In Proc. of AAAI.
Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, and Luke Zettlemoyer. 2021. Surface form competition: Why the highest probability answer isn't always right. arXiv preprint arXiv:2104.08315.
Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. EMNLP, abs/1909.00277.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. CoRR, abs/2001.08361.
Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In NMT@ACL.
Mahnaz Koupaee and William Yang Wang. 2018. WikiHow: A large scale text summarization dataset. arXiv preprint arXiv:1810.09305.
Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proc. of ACL-IJCNLP.
Bill Yuchen Lin, Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Xiang Ren, and William W Cohen. 2021. Differentiable open-ended commonsense reasoning. NAACL.
Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. 2020. CommonGen: A constrained text generation challenge for generative commonsense reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1823-1840, Online. Association for Computational Linguistics.
Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016.
How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. +Hugo Liu and Push Singh. 2004. Commonsense reasoning in and over natural language. In International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, pages 293-306. Springer. +Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. *Unicorn on rainbow: A universal commonsense reasoning model on a new multitask benchmark.* In AAAI. +John McCarthy et al. 1960. Programs with common sense. RLE and MIT computation center. +Gábor Melis, Chris Dyer, and Phil Blunsom. 2018. On the state of the art of evaluation in neural language models. In Proc. of ICLR. +Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? arXiv preprint. +Jekaterina Novikova, Ondrej Dušek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. +David A. Patterson, Joseph Gonzalez, Quoc V. Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David R. So, Maud Texier, and Jeff Dean. 2021. Carbon emissions and large neural network training. CoRR, abs/2104.10350. +Mostofa Patwary, Mohammad Shoeybi, Patrick LeGresley, Shrimai Prabhumoye, Jared Casper, Vijay Korthikanti, Vartika Singh, Julie Bernauer, Michael Houston, Bryan Catanzaro, Shaden Smith, Brandon Norick, Samyam Rajbhandari, Zhun Liu, George + +Zerveas, Elton Zhang, Reza Yazdani Aminabadi, Xia Song, Yuxiong He, Jeffrey Zhu, Jennifer Cruzan, Umesh Madan, Luis Vargas, and Saurabh Tiwary. 
2021. Using deepspeed and megatron to train megatron-turing nlg 530b, the world's largest and most powerful generative language model. +Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? In EMNLP. +Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines for natural language inference. In *The Seventh Joint Conference on Lexical and Computational Semantics (*SEM). +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. +Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, H. Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant M. Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sotiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake A. Hechtman, Laura Weidinger, Jason Gabriel, William S. 
Isaac, Edward Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. 2021. Scaling language models: Methods, analysis & insights from training gopher. CoRR, abs/2112.11446.
Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proc. of NAACL-HLT.
Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Winogrande: An adversarial winograd schema challenge at scale. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8732-8740.
Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019a. Atomic: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027-3035.
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. 2019b. Socialiqa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing.
Vered Shwartz and Yejin Choi. 2020. Do neural language models overcome reporting bias? In COLING.
Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Unsupervised commonsense question answering with self-talk. In EMNLP.
Felix Stahlberg and Bill Byrne. 2019. On NMT search errors and model errors: Cat got your tongue?
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3354-3360, Hong Kong, China. Association for Computational Linguistics.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proc. of ACL.
Paul Trichelair, Ali Emami, Adam Trischler, Kaheer Suleman, and Jackie Chi Kit Cheung. 2019. How reasonable are common-sense reasoning tasks: A case-study on the Winograd schema challenge and SWAG. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3382-3387, Hong Kong, China. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
Nathaniel Weir, Adam Poliak, and Benjamin Van Durme. 2020. Probing neural language models for human tacit assumptions. arXiv: Computation and Language.
Dani Yogatama, Cyprien de Masson d'Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, et al. 2019. Learning and evaluating general linguistic intelligence. arXiv preprint arXiv:1901.11373.
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. In EMNLP.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019a. Hellaswag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019b.
Defending against Neural Fake News.
Xuhui Zhou, Yue Zhang, Leyang Cui, and Dandan Huang. 2020. Evaluating commonsense in pretrained language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9733-9740.

# A Appendix Structure

We begin by quantifying the scaling behavior of the model to predict how performance changes with larger model sizes (Appendix B). We then plot the relationship between cross-entropy and answer length for each of the four datasets (Appendix C). After that, we describe experiments that use knowledge base triplets as a form of in-context learning (Appendix D). Lastly, in Appendix E, we provide qualitative examples, showing items that: (i) all model sizes get right, (ii) all model sizes get wrong, and (iii) only the larger models get right.

# B Scaling Behavior

To estimate the model size needed to reach human-level performance, we fit a linear model that predicts accuracy from $\log (\text{params})$ . We derive the human performance from each respective paper and/or leaderboard. For HellaSwag and PIQA, human-level performance is at $95\%$ . For WinoGrande, it is at $94\%$ and for Social IQa it is at $84\%$ . On HellaSwag, we predict that 1.4T parameters are needed to achieve human-level performance; on PIQA we predict 102T parameters; on WinoGrande we predict over 2,000 trillion parameters. Social IQa scales particularly poorly, and we estimate that over $10^{18}$ parameters would be needed.

# C Cross-entropy vs answer length for all datasets

![](images/dfb270035695152c920d74291718a19a839dc3e73bf74b52f64e8f4ea7761e86.jpg)
(a) Answer length vs cross-entropy (average log probability across tokens) for PIQA.

![](images/b58358de82df42526beda71ee61aa420d04ebcc4b90123cc8b5cf3a0c0c505ff.jpg)
(b) Answer length vs cross-entropy (average log probability across tokens) for SocialIQA.
+ +![](images/67dde480d5128e3442ce409a3d13d9690763fa64c715597f6588cf886081dfba.jpg) +(a) Answer length vs cross-entropy (average log probability across tokens) for HellaSWAG. + +![](images/ebfd18b8b7a6c4ff252a5c9302bc669b214ec6b4fba94d90d141bdc49b8d02d6.jpg) +(b) Answer length vs cross-entropy (average log probability across tokens) for Winogrande. + +# D Commonsense Knowledge Bases + +Given the implicit nature of commonsense knowledge, a language model's pretraining corpora might not contain all of the supporting evidence that is required to answer commonsense understanding questions — a phenomenon widely known as the reporting bias problem (Gordon and Van Durme, 2013). Thus, prior work has proposed to use external knowledge bases for improving the zero-shot performance of LMs on commonsense benchmarks (Bosselut et al., 2021; Bauer and Bansal, 2021). These approaches are particularly interesting, as the knowledge base augmentation only happens at test time, rendering this approach compatible with any pretrained generative LM. While prior work has shown the effectiveness of this approach over the zero-shot baseline that lacks access to commonsense knowledge bases (CSKBs), we find that the performance of the baseline model is highly sensitive to certain evaluation design choices ( $\S 5$ ). A natural question, therefore, is the following: If we carefully optimize the evaluation design choices of the baseline model, would we still observe similar improvements through CSKB augmentation? + +Setup. To answer this, we replicate prior work by adding commonsense knowledge base entries at test time; such knowledge base triplets can potentially provide the relevant implicit commonsense knowledge that makes the correct answer more likely than the rest. To ensure the generality of our findings, we apply this approach to multiple model sizes that we explored in §3.2. Here we consider the pre-extracted knowledge base triplets that are made publicly available by Shwartz et al. 
(2020). We use a similar score function as Shwartz et al. (2020), where, for each answer choice $\mathbf{y} \in Y(\mathbf{x})$ , we choose the knowledge base triplet that yields the highest score:

$$
s_{kg}(\mathbf{y} \mid \mathbf{x}) \triangleq \sum_{\mathbf{t} \in T} s(\mathbf{y}; \mathbf{t} \mid \mathbf{x}) \approx \max_{\mathbf{t} \in T} s(\mathbf{y}; \mathbf{t} \mid \mathbf{x}),
$$

where $s(\mathbf{y};\mathbf{t}|\mathbf{x})$ denotes the cross-entropy of the concatenated answer choice $\mathbf{y}$ and the extracted knowledge base triplet $\mathbf{t}$ , conditional on the question/context $\mathbf{x}$ . Here $T$ denotes the set of all extracted commonsense knowledge triplets, which are generated from Comet (Bosselut et al., 2019).
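As a concrete illustration of the scoring rule above, the sketch below scores each answer choice against every candidate triplet and keeps the best-scoring one. The `lm_score` function is a toy stand-in introduced for this sketch only: a real implementation would return the average per-token log-probability of the answer concatenated with the triplet, conditional on the question, under a pretrained LM.

```python
from typing import Callable, List

def lm_score(question: str, answer: str, triplet: str) -> float:
    # Toy stand-in scorer: rewards lexical overlap between the answer and the
    # triplet, with a mild length penalty. A real scorer would use an LM's
    # average per-token log-probability instead.
    answer_words = set(answer.lower().split())
    triplet_words = set(triplet.lower().split())
    return len(answer_words & triplet_words) - 0.01 * len(answer.split())

def s_kg(question: str, answer: str, triplets: List[str],
         score: Callable[[str, str, str], float] = lm_score) -> float:
    # Approximate the summed score by its dominant term: the max over triplets.
    return max(score(question, answer, t) for t in triplets)

def predict(question: str, choices: List[str], triplets: List[str]) -> str:
    # Pick the answer choice whose best supporting triplet scores highest.
    return max(choices, key=lambda y: s_kg(question, y, triplets))
```

With an LM-based `lm_score`, switching the scored span (answer-only versus the full question+answer concatenation) is a one-line change inside the scorer, which is exactly the kind of evaluation design choice discussed in this section.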
| Model | ZS | w/t Comet | w/t Atomic | w/t CN |
| --- | --- | --- | --- | --- |
| 44M | 42.3 | 42.9 | 42.3 | 40.6 |
| 117M | 43.6 | 44.0 | 43.6 | 42.2 |
| 400M | 46.3 | 46.8 | 44.7 | 44.1 |
| 1.3B | 47.0 | 46.8 | 46.4 | 44.7 |
| 7B | 48.5 | 48.6 | 47.5 | 46.1 |

| Model | ZS | w/t Comet | Self-Talk |
| --- | --- | --- | --- |
| GPT2 | 41.1 | 47.5 | 46.2 |
+ +Table 7: Zero-shot performance on Social IQa when using different knowledge bases. GPT2 results are taken from Shwartz et al. (2020). ZS: zero-shot performance; CN: ConceptNet. We do not include the Gopher results with 280B parameters — due to computational considerations and much slower inference. + +One key difference is that we score the answer and knowledge base triplet conditional on the question, whereas Shwartz et al. (2020) scored the concatenation of question, answer, and triplet instead. + +In Table 7, we summarize our results on Social IQa, which has the highest gap between the zero-shot and SOTA performance (Fig. 2). We compare our results with those of Shwartz et al. (2020), who used GPT2 as the base model. Our results in Table 7 provide an interesting contrast to the findings of Shwartz et al. (2020): Our baseline zero-shot model with 1.3B parameters achieves an accuracy of $47.0\%$ on Social IQa, substantially outperforming the reported GPT2 result of Shwartz et al. (2020) — which achieves $41.1\%$ — despite the fact that GPT2 has more parameters (1.5B vs our 1.3B). In fact, the same 1.3B zero-shot model — which does not benefit from any commonsense knowledge base triplets — nearly matches the performance of GPT2 augmented with Comet (Bosselut et al., 2019) ( $47.0\%$ for our zero-shot 1.3B model vs $47.5\%$ for GPT2 augmented with COMET; Table 7), and also outperforms the GPT2 model that is augmented with self-talk. Nevertheless, we find that adding knowledge base triplets fails to yield substantial improvements for our models; this finding is consistent across three different knowledge bases and five model sizes. On the contrary, adding such knowledge base triplets can occasionally decrease performance compared to the zero-shot baseline. + +We remark on two significant aspects of our findings. 
First, it is important to compare proposed improvements against strong, well-tuned baselines (Henderson et al., 2018; Melis et al., 2018), which can achieve surprisingly competitive performance. We identify the choice of the scored span as a particularly important design choice: Whereas Shwartz et al. (2020) scored the GPT2 model on the concatenation of both question and answer, we instead calculate the cross-entropy of the answer given the question. Second, certain improvements that are observed under a particular set of evaluation design choices may not necessarily be replicated under a different set. This finding reiterates the importance of explicitly stating the evaluation design choices used in each experiment, and identifying whether or not the observed improvements are robust across different evaluation design choices ( $\S 5$ ).

# E Examples

# E.1 Social IQa

# All Models Incorrect

```txt
{'context': "Tracy didn't go home that evening and resisted Riley's attacks.", 'question': 'What does Tracy need to do before this?', 'answerA': 'make a new plan', 'answerB': 'Go home and see Riley', 'answerC': 'Find somewhere to go', 'correct': 'C'}
```

```txt
{'context': 'Aubrey kept the baby up at night to watch for a concussion.', 'question': 'What will happen to Aubrey?', 'answerA': "The baby fell asleep despite Aubrey's best effort", 'answerB': 'gets so sleepy but stays awake anyway', 'answerC': 'and the baby both fell asleep late in the night', 'correct': 'B'}
```

# All Models Correct

```txt
{'context': 'Kendall opened their mouth to speak and what came out shocked everyone.', 'question': 'How would you describe Kendall?', 'answerA': 'a very quiet person', 'answerB': 'a very passive person', 'answerC': 'a very aggressive and talkative person', 'correct': 'C'}
```

```txt
{'context': 'Sydney went to our family farm, taking the trash with her, and set it on fire on the ground.', 'question': 'How would
Sydney feel afterwards?', 'answerA': 'feeling strong', 'answerB': 'burning down', 'answerC': 'upset because the fire has gotten out of control', 'correct': 'C'}
{'context': 'Robin always gets pizza on the way home from work for her family on Fridays', 'question': 'What will Robin want to do next?', 'answerA': 'pick up the pizza', 'answerB': 'complain to the others', 'answerC': 'finish work', 'correct': 'A'}
```

Larger Models Correct The 1.4B, 7.1B, and 280B model all got the following correct:

```txt
{'context': 'Alex paid extra money to get more secret details about the game strategy.', 'question': 'What will Alex want to do next?', 'answerA': 'play the game more', 'answerB': 'ignore the advice', 'answerC': 'stop playing the video game', 'correct': 'A'}
```

The 417M, 7.1B, and 280B model all got the following correct:

```txt
{'context': 'Kai and Skylar were good friends. Kai had finally worked up the courage to ask Skylar on a date. They gave Skylar a meaningful gift to test the waters.', 'question': 'What will Kai want to do next?', 'answerA': 'say thank you for the gift', 'answerB': 'Find out whether Skylar reciprocates the feelings', 'answerC': "Tell Skylar they'd like to just be friends", 'correct': 'B'}
```

# E.2 WinoGrande

# All Models Incorrect

```txt
{'label': 1, 'option1': 'Tanya', 'option2': 'Sarah', 'sentence': 'Tanya was unrecognizable after Sarah was done beating them, so _ ended up going to jail.'}
{'label': 1, 'option1': 'Logan', 'option2': 'Justin', 'sentence': 'After Logan pitched a ball that got clobbered for a home run by Justin in a baseball game, _ felt exultant.'}
```

# All Models Correct

```txt
{'label': 1, 'option1': 'sausage', 'option2': 'ball', 'sentence': 'When the dog behaves I like to give him a sausage otherwise I give him a ball.
I gave him the _ since he was bad.'}
{'label': 1, 'option1': 'Kayla', 'option2': 'Natalie', 'sentence': "Kayla always wears sunscreen outdoors but Natalie doesn't because _ isn't concerned about getting neck wrinkles."}
```

Only Large Models Correct Models 400M and larger got the following correct:

```txt
{'label': 0, 'option1': 'Nick', 'option2': 'Ryan', 'sentence': 'Nick did not like sauces made from tomato, only creamy sauces. Ryan knew this so he only made white sauce when _ came over.'}
```

Models 1.4B and larger got the following correct:

```txt
{'label': 0, 'option1': 'Adam', 'option2': 'Jason', 'sentence': 'Adam loved dogs but Jason was afraid of them, so only _ petted the poodle.'}
```
\ No newline at end of file diff --git a/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/images.zip b/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b659c403e1d16fdb9c039df3ebad4db55ca3008b --- /dev/null +++ b/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:537caefc30abc4cde8d86041d6b1813b947c56b0c333475354d3e6573915a433 +size 361523 diff --git a/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/layout.json b/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..57774c85cadbf4f45d3663d5aa787c466b3d6556 --- /dev/null +++ b/asystematicinvestigationofcommonsenseknowledgeinlargelanguagemodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad153156e8f5ca03a2e55c7edcad7f6b72f9c4c1493e1f40b6edf36970430df0 +size 473898 diff --git a/atemplatebasedmethodforconstrainedneuralmachinetranslation/0f5276f1-4ab8-40a2-9e6b-750d47a39bae_content_list.json
b/atemplatebasedmethodforconstrainedneuralmachinetranslation/0f5276f1-4ab8-40a2-9e6b-750d47a39bae_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..55baf1acd545b50d3f62b5e91793f110f7bc38b4 --- /dev/null +++ b/atemplatebasedmethodforconstrainedneuralmachinetranslation/0f5276f1-4ab8-40a2-9e6b-750d47a39bae_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:defcee880c4230d5209b462425218a7132cea0aaf150a2c9fdd819cb8344a9a6 +size 104231 diff --git a/atemplatebasedmethodforconstrainedneuralmachinetranslation/0f5276f1-4ab8-40a2-9e6b-750d47a39bae_model.json b/atemplatebasedmethodforconstrainedneuralmachinetranslation/0f5276f1-4ab8-40a2-9e6b-750d47a39bae_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b2d187250747b9837396a6e21f592baabe92e8b4 --- /dev/null +++ b/atemplatebasedmethodforconstrainedneuralmachinetranslation/0f5276f1-4ab8-40a2-9e6b-750d47a39bae_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48b3c10c26f07c7cf539f5cad34b5670ab46b164620a01a5fa80f65c0cc7cc78 +size 126351 diff --git a/atemplatebasedmethodforconstrainedneuralmachinetranslation/0f5276f1-4ab8-40a2-9e6b-750d47a39bae_origin.pdf b/atemplatebasedmethodforconstrainedneuralmachinetranslation/0f5276f1-4ab8-40a2-9e6b-750d47a39bae_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f9e38d283922ddfc7f4cf7b16bfa2e67f76a46c5 --- /dev/null +++ b/atemplatebasedmethodforconstrainedneuralmachinetranslation/0f5276f1-4ab8-40a2-9e6b-750d47a39bae_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f9b68f1ff21a54fe5e7ca62e333f2c2b7741c4393201ac24d5191992925368b +size 2210147 diff --git a/atemplatebasedmethodforconstrainedneuralmachinetranslation/full.md b/atemplatebasedmethodforconstrainedneuralmachinetranslation/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..d70968372b94663121daba527b691d73ba1fb2ad --- /dev/null +++ b/atemplatebasedmethodforconstrainedneuralmachinetranslation/full.md @@ -0,0 +1,446 @@

# A Template-based Method for Constrained Neural Machine Translation

Shuo Wang $^{1}$ Peng Li $^{2*}$ Zhixing Tan $^{6}$ Zhaopeng Tu $^{7}$ Maosong Sun $^{1,4}$ Yang Liu $^{1,2,3,4,5*}$

$^{1}$ Dept. of Comp. Sci. & Tech., Institute for AI, Tsinghua University, Beijing, China

$^{1}$ Beijing National Research Center for Information Science and Technology

$^{2}$ Institute for AI Industry Research, Tsinghua University, Beijing, China

$^{3}$ Beijing Academy of Artificial Intelligence, Beijing, China

$^{4}$ International Innovation Center of Tsinghua University, Shanghai, China

$^{5}$ Quan Cheng Laboratory $^{6}$ Zhongguancun Laboratory, Beijing, P.R. China $^{7}$ Tencent AI Lab

# Abstract

Machine translation systems are expected to cope with various types of constraints in many practical scenarios. While neural machine translation (NMT) has achieved strong performance in unconstrained cases, it is non-trivial to impose pre-specified constraints into the translation process of NMT models. Although many approaches have been proposed to address this issue, most existing methods cannot satisfy the following three desiderata at the same time: (1) high translation quality, (2) high match accuracy, and (3) low latency. In this work, we propose a template-based method that can yield results with high translation quality and match accuracy, while its inference speed is comparable with that of unconstrained NMT models. Our basic idea is to rearrange the generation of constrained and unconstrained tokens through a template. Our method does not require any changes in the model architecture and the decoding algorithm.
Experimental results show that the proposed template-based approach can outperform several representative baselines in both lexically and structurally constrained translation tasks. $^{1}$ + +# 1 Introduction + +Constrained machine translation is of important value for a wide range of practical applications, such as interactive translation with user-specified lexical constraints (Koehn, 2009; Li et al., 2020; Jon et al., 2021), domain adaptation with in-domain dictionaries (Michon et al., 2020; Niehues, 2021), and webpage translation with markup tags as structural constraints (Hashimoto et al., 2019; Hanneman and Dinu, 2020). Developing constrained neural machine translation (NMT) approaches can make NMT models applicable to more real-world scenarios (Bergmanis and Pinnis, 2021). + +However, it is challenging to directly impose constraints for NMT models due to their end-to-end nature (Post and Vilar, 2018). In accordance with this problem, a branch of studies modifies the decoding algorithm to take the constraints into account when selecting candidates (Hokamp and Liu, 2017; Hasler et al., 2018; Post and Vilar, 2018; Hu et al., 2019; Hashimoto et al., 2019). Although constrained decoding algorithms can guarantee the presence of constrained tokens, they can significantly slow down the translation process (Wang et al., 2022) and can sometimes result in poor translation quality (Zhang et al., 2021). + +Another branch of works constructs synthetic data to help NMT models acquire the ability to translate with constraints (Song et al., 2019; Dinu et al., 2019; Michon et al., 2020). For instance, Hanneman and Dinu (2020) propose to inject markup tags into plain parallel texts to learn structurally constrained NMT models. The major drawback of data augmentation based methods is that they sometimes violate the constraints (Hanneman and Dinu, 2020; Chen et al., 2021), limiting their application in constraint-critical situations. 
+ +In this work, we use free tokens to denote the tokens that are not covered by the provided constraints. Our motivation is to decompose the whole constrained translation task into the arrangement of constraints and the generation of free tokens. The constraints can be of many types, ranging from phrases in lexically constrained translation to markup tags in structurally constrained translation. Intuitively, only arranging the provided constraints into the proper order is much easier than generating the whole sentence. Therefore, we build a template by abstracting free token fragments into nonterminals, which are used to record the relative position of all the involved fragments. The template can be treated as a plan of the original sentence. The arrangement of constraints can be learned through a template generation sub-task. + +Once the template is generated, we need some derivation rules to convert the nonterminals mentioned above into free tokens. Each derivation rule shows the correspondence between a nonterminal and a free token fragment. These rules can be learned by the NMT model through semi-structured data. We call this sub-task template derivation. During inference, the model firstly generates the template and then extends each nonterminal in the template into natural language text. Note that the two proposed sub-tasks can be accomplished through a single decoding pass. Thus the decoding speed of our method is comparable with unconstrained NMT systems. By designing template format, our approach can cope with different types of constraints, such as lexical constraints, XML structural constraints, or Markdown constraints. + +Contributions In summary, the contributions of this work can be listed as follows: + +- We propose a novel template-based constrained translation framework to disentangle the generation of constraints and free tokens. 
+- We instantiate the proposed framework with both lexical and structural constraints, demonstrating the flexibility of this framework. +- Experiments show that our method can outperform several strong baselines, achieving high translation quality and match accuracy while maintaining the inference speed. + +# 2 Related Work + +# 2.1 Lexically Constrained Translation + +Several researchers direct their attention to modifying the decoding algorithm to impose lexical constraints (Hasler et al., 2018). For instance, Hokamp and Liu (2017) propose grid beam search (GBS) that organizes candidates in a grid, which enumerates the provided constrained tokens at each decoding step. However, the computation complexity of GBS scales linearly with the number of constrained tokens. To reduce the runtime complexity, Post and Vilar (2018) propose dynamic beam allocation (DBA), which divides a fixed size of beam for candidates having met the same number of constraints. Hu et al. (2019) propose to vectorize DBA further. The resulting VDBA algorithm is still significantly slower compared with the vanilla beam search algorithm (Wang et al., 2022). + +Another line of studies trains the model to copy the constraints through data augmentation. Song et al. (2019) propose to replace the corresponding source phrases with the target constraints, and Dinu et al. (2019) propose to insert target constraints as inline annotations. Some other works propose to append target constraints to the whole source sentence as side constraints (Chen et al., 2020; Niehues, 2021; Jon et al., 2021). Although these methods introduce little additional computational overhead at inference time, they can not guarantee the appearance of the constraints (Chen et al., 2021). Xiao et al. (2022) transform constrained translation into a bilingual text-infilling task. A limitation of text-infilling is that it can not reorder the constraints, which may negatively affect the translation quality for distinct language pairs. 
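For concreteness, the two data-augmentation schemes described above can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (exact string matching, whitespace tokenization), not code from the cited papers; the `<c>`/`</c>` annotation tokens are placeholders:

```python
from typing import List, Tuple

def augment_replace(src: str, pairs: List[Tuple[str, str]]) -> str:
    # Song et al. (2019)-style augmentation: replace each source phrase
    # with its pre-specified target constraint.
    for u, v in pairs:
        src = src.replace(u, v)
    return src

def augment_inline(src: str, pairs: List[Tuple[str, str]]) -> str:
    # Dinu et al. (2019)-style augmentation: keep the source phrase and
    # append the target constraint as an inline annotation (placeholder tags).
    for u, v in pairs:
        src = src.replace(u, f"{u} <c> {v} </c>")
    return src
```

Training on such synthetic pairs encourages the model to copy the annotated constraints, but, as noted above, copying is not guaranteed at inference time.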
+ +Recently, some researchers have tried to adapt the architecture of NMT models for this task. Susanto et al. (2020) adopt non-autoregressive translation models (Gu et al., 2019) to insert target constraints. Wang et al. (2022) prepend vectorized keys and values to the attention modules (Vaswani et al., 2017) to integrate constraints. However, their model may still suffer from low match accuracy when decoding without VDBA. In this work, our method can achieve high translation quality and match accuracy without significantly increasing the inference overhead. + +# 2.2 Structurally Constrained Translation + +Structurally constrained translation is useful since text data is often wrapped with markup tags on the Web (Hashimoto et al., 2019), which is an essential source of information for humans. Compared with lexically constrained translation, structurally constrained translation is relatively unexplored. Joanis et al. (2013) examine a two-stage method for statistical machine translation systems, which firstly translates the plain text and then injects the tags based on phrase alignments and some carefully designed rules. Moving to the NMT paradigm, large-scale parallel corpora with structurally aligned markup tags are scarce. Hanneman and Dinu (2020) propose to inject tags into plain text to create synthetic data. Hashimoto et al. (2019) collect a parallel dataset consisting of structural text translated by human experts. Zhang et al. (2021) propose a constrained decoding algorithm to translate structured text. However, their method significantly slows down the translation process. + +In this work, our approach can be easily extended for structural constraints, leaving the decoding algorithm unchanged. The template in our approach can be seen as an intermediate plan, which has been investigated in the field of data-to-text generation (Moryossef et al., 2019). Zhang et al. 
(2019) also explored the idea of disentangling different parts in a sentence using special tokens. + +# 3 Approach + +# 3.1 Template-based Machine Translation + +Given a source-language sentence $\mathbf{x} = x_{1}\dots x_{I}$ and a target-language sentence $\mathbf{y} = y_{1}\dots y_{J}$ , an NMT model is trained to estimate the conditional probability $P(\mathbf{y}|\mathbf{x};\pmb {\theta})$ , which can be given by + +$$ +P (\mathbf {y} | \mathbf {x}; \boldsymbol {\theta}) = \prod_ {j = 1} ^ {J} P (y _ {j} | \mathbf {x}, \mathbf {y} _ {< j}; \boldsymbol {\theta}), \tag {1} +$$ + +where $\theta$ is the set of parameters to optimize and $\mathbf{y}_{< j}$ is the partial translation at the $j$ -th step. + +In this work, we firstly build a template to simplify the whole sentence. Formally, we use s and t to represent the source- and target-side templates, respectively. In the template, free token fragments are abstracted into nonterminals. We use e and f to denote the derivation rules of the nonterminals for the source and target template, respectively. + +The model is trained on two sub-tasks. Firstly, the model learns to generate the target template $\mathbf{t}$ : + +$$ +P (\mathbf {t} | \mathbf {s}, \mathbf {e}; \boldsymbol {\theta}) = \prod_ {j = 1} ^ {T} P (t _ {j} | \mathbf {s}, \mathbf {e}, \mathbf {t} _ {< j}; \boldsymbol {\theta}). \tag {2} +$$ + +Secondly, we train the same model to estimate the conditional probability of $\mathbf{f}$ : + +$$ +P (\mathbf {f} | \mathbf {s}, \mathbf {e}, \mathbf {t}; \boldsymbol {\theta}) = \prod_ {j = 1} ^ {F} P (f _ {j} | \mathbf {s}, \mathbf {e}, \mathbf {t}, \mathbf {f} _ {< j}; \boldsymbol {\theta}). \quad (3) +$$ + +The target sentence $\mathbf{y}$ can be reconstructed by extending each nonterminal in $\mathbf{t}$ using the corresponding derivation rule in $\mathbf{f}$ . We can jointly learn the two sub-tasks in one pass to improve both the training and inference efficiency. 
Formally, the model is trained to maximize the following joint probability of $\mathbf{t}$ and $\mathbf{f}$ in practice: + +$$ +P (\mathbf {t}, \mathbf {f} | \mathbf {s}, \mathbf {e}; \boldsymbol {\theta}) = P (\mathbf {t} | \mathbf {s}, \mathbf {e}; \boldsymbol {\theta}) \times P (\mathbf {f} | \mathbf {s}, \mathbf {e}, \mathbf {t}; \boldsymbol {\theta}). \tag {4} +$$ + +# 3.2 Template for Lexical Constraints + +In lexically constrained translation, some source phrases in the input sentence are required to be translated into pre-specified target phrases. For a source sentence $\mathbf{x}$ , we use $\{\langle \mathbf{u}^{(n)},\mathbf{v}^{(n)}\rangle \}_{n = 1}^{N}$ to denote the given constraint pairs, where $\mathbf{u}^{(n)}$ is the $n$ -th source constraint, and $\mathbf{v}^{(n)}$ is the corresponding target constraint. All the $N$ source constraints can divide $\mathbf{x}$ into $2N + 1$ fragments: + +$$ +\mathbf {x} = \mathbf {p} ^ {(0)} \mathbf {u} ^ {(1)} \mathbf {p} ^ {(1)} \dots \mathbf {u} ^ {(N)} \mathbf {p} ^ {(N)}, \tag {5} +$$ + +where $\mathbf{p}^{(n)}$ is the $n$ -th free token fragment. We can set $\mathbf{p}^{(0)}$ to an empty string to represent sentences that start with a constraint, and set $\mathbf{p}^{(N)}$ to an empty string for sentences that end with a constraint. We can also set $\mathbf{p}^{(n)}$ to an empty string for the cases where $\mathbf{u}^{(n)}$ and $\mathbf{u}^{(n + 1)}$ are adjacent in $\mathbf{x}$ . Similarly, the target sentence can be represented by + +$$ +\mathbf {y} = \mathbf {q} ^ {(0)} \mathbf {v} ^ {(i _ {1})} \mathbf {q} ^ {(1)} \dots \mathbf {v} ^ {(i _ {N})} \mathbf {q} ^ {(N)}, \tag {6} +$$ + +where $\mathbf{q}^{(n)}$ is the $n$ -th free token fragment in the target sentence $\mathbf{y}$ . We use $i_1, \dots, i_N$ to denote the order of the constraints in $\mathbf{y}$ . 
The $n$ -th index $i_n$ is not necessarily equal to $n$ , since the order of the constraints in the target sentence $\mathbf{y}$ is often different from that in the source sentence $\mathbf{x}$ .

We then abstract each fragment of text into nonterminals to build the template for lexically constrained translation. Concretely, the $n$ -th free token fragment in the source sentence $\mathbf{x}$ is abstracted into $\mathrm{X}_n$ , for each $n \in \{0, \dots, N\}$ . The $n$ -th free token fragment in the target sentence is abstracted into $\mathrm{Y}_n$ , for each $n \in \{0, \dots, N\}$ . In order to indicate the alignment between corresponding source and target constraints, we abstract $\mathbf{u}^{(n)}$ and $\mathbf{v}^{(n)}$ into the same nonterminal $\mathrm{C}_n$ . Note that $\mathrm{X}_n$ and $\mathrm{Y}_n$ are not linked nonterminals, since fragments of free tokens are not bilingually aligned. The resulting source- and target-side templates are given by

$$
\mathbf{s} = \mathrm{X}_0 \mathrm{C}_1 \mathrm{X}_1 \dots \mathrm{C}_N \mathrm{X}_N,
$$

$$
\mathbf{t} = \mathrm{Y}_0 \mathrm{C}_{i_1} \mathrm{Y}_1 \dots \mathrm{C}_{i_N} \mathrm{Y}_N. \tag{7}
$$

We need to define some derivation rules to convert the template into a natural language sentence. The derivation of nonterminals can be seen as the inverse of the abstraction process. Thus the derivation of the target-side template $\mathbf{t}$ would be

$$
\mathrm{C}_n \rightarrow \mathbf{v}^{(n)} \quad \text{for each } n \in \{1, \dots, N\}, \tag{8}
$$

$$
\mathrm{Y}_n \rightarrow \mathbf{q}^{(n)} \quad \text{for each } n \in \{0, \dots, N\}.
$$

![](images/b46cc2a8413d77b8500edde4455781732628b267e5908bfebeb9b5ee2fbad5eb.jpg)
Figure 1: Example for lexically constrained translation. The constraints are (周杰伦, Jay Chou) and (七里香, Orange Jasmine).
Note that $\mathrm{X}_n$ and $\mathrm{Y}_n$ are not linked nonterminals, since the source and target free token fragments are not necessarily aligned. The derivation rule $\mathrm{X}_0 \rightarrow$ 歌曲 is learned through the concatenation of $\mathrm{X}_0$ and 歌曲 (i.e., $\mathrm{X}_0$ 歌曲). "$\phi$" denotes an empty string. See Section 3.2 for more details.

The derivation of the source-side template can be defined similarly. Note that $\mathrm{C}_n$ produces the $n$-th source constraint $\mathbf{u}^{(n)}$ at the source side while producing the target constraint $\mathbf{v}^{(n)}$ at the target side. To make the derivation rules learnable by NMT models, we propose to use the concatenation of the nonterminal and the corresponding sequence of terminals to denote each derivation rule. For example, we use $\mathrm{Y}_n\mathbf{q}^{(n)}$ to represent $\mathrm{Y}_n \rightarrow \mathbf{q}^{(n)}$. We use $\mathbf{d}$ and $\mathbf{f}$ to denote the derivation of constraints and free tokens at the target side, respectively:

$$
\mathbf{d} = \mathrm{C}_1 \mathbf{v}^{(1)} \dots \mathrm{C}_N \mathbf{v}^{(N)}, \tag{9}
$$

$$
\mathbf{f} = \mathrm{Y}_0 \mathbf{q}^{(0)} \dots \mathrm{Y}_N \mathbf{q}^{(N)}.
$$

At the source side, we use $\mathbf{c}$ and $\mathbf{e}$ to denote the derivation of constraints and free tokens, respectively; they are defined analogously. Since the constraints are pre-specified by the users, the model only needs to learn the derivation of free tokens. To this end, we place the derivation of constraint-related nonterminals before the template as a conditional prefix. The model then learns the generation of the template and the derivation of free tokens, step by step.
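As a concrete illustration, the template construction of Eq. (7) can be sketched in a few lines of Python. This is a hypothetical sketch, not the authors' code: the function name, the `X_n`/`C_n`/`Y_n` string labels, and the argument convention are our own assumptions.

```python
def build_templates(n_constraints, tgt_order):
    """Build the source- and target-side templates of Eq. (7).

    n_constraints: the number N of constraint pairs.
    tgt_order: the permutation [i_1, ..., i_N] giving the order in
               which the constraints appear in the target sentence.
    """
    # Source side: X_0 C_1 X_1 ... C_N X_N
    src = ["X_0"]
    for n in range(1, n_constraints + 1):
        src += [f"C_{n}", f"X_{n}"]
    # Target side: Y_0 C_{i_1} Y_1 ... C_{i_N} Y_N
    tgt = ["Y_0"]
    for n, i in enumerate(tgt_order, start=1):
        tgt += [f"C_{i}", f"Y_{n}"]
    return " ".join(src), " ".join(tgt)

# Two constraints that appear in swapped order on the target side:
s, t = build_templates(2, tgt_order=[2, 1])
# s == "X_0 C_1 X_1 C_2 X_2"
# t == "Y_0 C_2 Y_1 C_1 Y_2"
```

The derivation sequences $\mathbf{d}$, $\mathbf{e}$, $\mathbf{f}$ of Eq. (9) are then just these nonterminals interleaved with their terminal fragments.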
The final format of the input and output sequences at training time is

$$
\mathbf{x}' = \mathbf{c} <\mathrm{sep}> \mathbf{s} <\mathrm{sep}> \mathbf{e}, \tag{10}
$$

$$
\mathbf{y}' = \mathbf{d} <\mathrm{sep}> \mathbf{t} <\mathrm{sep}> \mathbf{f},
$$

respectively. We use the delimiter $<\mathrm{sep}>$ to separate the template and the derivations. Figure 1 gives an example of both $\mathbf{x}'$ and $\mathbf{y}'$. At inference time, we feed $\mathbf{x}'$ to the encoder, and provide "$\mathbf{d} <\mathrm{sep}>$" to the decoder as the constrained prefix. The model then generates the remaining part of $\mathbf{y}'$ (i.e., "$\mathbf{t} <\mathrm{sep}> \mathbf{f}$").

![](images/d909e9363f79debe295720dd7abddba8caf6d0d0139b3e9724347ddf0f8b4c3a.jpg)
Figure 2: The template can be converted into a natural language sentence by replacing the nonterminals according to the corresponding derivation rules.

Figure 2 explains the way we convert the output sequence into a natural language sentence. The conversion from the template to the target-language sentence can be done with a simple script, and the computational cost of the conversion is negligible compared with the model inference.

Note that we also abstract the constraints when building the template. In this way, the model only needs to generate the order of the constraints, rather than copy all their tokens, which may suffer from copy failure (Chen et al., 2021). The formal representation for our lexically constrained model is slightly different from that defined in Eq.
(4), which should be changed into

$$
P(\mathbf{t}, \mathbf{f} | \mathbf{c}, \mathbf{s}, \mathbf{e}, \mathbf{d}; \boldsymbol{\theta}) = P(\mathbf{t} | \mathbf{c}, \mathbf{s}, \mathbf{e}, \mathbf{d}; \boldsymbol{\theta}) \times P(\mathbf{f} | \mathbf{c}, \mathbf{s}, \mathbf{e}, \mathbf{d}, \mathbf{t}; \boldsymbol{\theta}). \tag{11}
$$

![](images/b3df7a701335143f12476854698f25db2a276f43e6bdf771911391d8dbb507cb.jpg)
Figure 3: Example for structurally constrained translation. The markup tags are preserved in the template, while free tokens are abstracted. Note that $\mathrm{X}_n$ and $\mathrm{Y}_n$ are not linked nonterminals. See Section 3.3 for more details.

# 3.3 Template for Structural Constraints

The major challenge of structured text translation is to maintain the correctness of the structure, which is often indicated by markup tags (Hashimoto et al., 2019). The proposed framework can also deal with structurally constrained translation. Similarly, we replace free token fragments with nonterminals to build the template, while the markup tags are preserved. Figure 3 shows an example. Formally, given a sentence pair $\langle \mathbf{x}, \mathbf{y} \rangle$ with $N$ markup tags, the source- and target-side templates are given by

$$
\mathbf{s} = \mathrm{X}_0 <\mathrm{tag}_1> \mathrm{X}_1 \dots <\mathrm{tag}_N> \mathrm{X}_N, \tag{12}
$$

$$
\mathbf{t} = \mathrm{Y}_0 <\mathrm{tag}_{i_1}> \mathrm{Y}_1 \dots <\mathrm{tag}_{i_N}> \mathrm{Y}_N,
$$

respectively. The order of the markup tags at the target side (i.e., $i_1 \cdots i_N$) may differ from that at the source side (i.e., $1 \cdots N$).
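The tag-preserving abstraction of Eq. (12) can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the regex-based tag matcher, the function name, and the `X_n` labels are our own simplifications (a real system would match against the corpus's actual tag inventory).

```python
import re

def make_structural_template(sentence, prefix="X"):
    """Abstract free-token fragments between markup tags.

    Returns the template (tags kept, fragments replaced by
    nonterminals) and the derivation sequence (each nonterminal
    followed by its fragment, as in Eq. 13).  Tags are matched
    with a naive <...> pattern for illustration only.
    """
    parts = re.split(r"(<[^>]+>)", sentence)
    template, derivation, n = [], [], 0
    for part in parts:
        part = part.strip()
        if re.fullmatch(r"<[^>]+>", part):
            template.append(part)          # keep the markup tag
        else:
            nt = f"{prefix}_{n}"
            template.append(nt)            # abstract the free fragment
            # An empty fragment derives the empty string (phi).
            derivation += [nt] + ([part] if part else [])
            n += 1
    return " ".join(template), " ".join(derivation)
```

For example, `make_structural_template("click <a> here </a>")` yields the template `X_0 <a> X_1 </a> X_2` and the derivation `X_0 click X_1 here X_2`, with `X_2` deriving the empty string.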
For each $n \in \{0, \dots, N\}$, $\mathrm{X}_n$ can be derived into the $n$-th source-side free token fragment $\mathbf{p}^{(n)}$, and $\mathrm{Y}_n$ can be derived into the target-side free token fragment $\mathbf{q}^{(n)}$. $\mathrm{X}_n$ and $\mathrm{Y}_n$ are not linked. The derivation sequences are defined as

$$
\mathbf{e} = \mathrm{X}_0 \mathbf{p}^{(0)} \dots \mathrm{X}_N \mathbf{p}^{(N)}, \tag{13}
$$

$$
\mathbf{f} = \mathrm{Y}_0 \mathbf{q}^{(0)} \dots \mathrm{Y}_N \mathbf{q}^{(N)}.
$$

The format of the input and output is

$$
\mathbf{x}' = \mathbf{s} <\mathrm{sep}> \mathbf{e}, \tag{14}
$$

$$
\mathbf{y}' = \mathbf{t} <\mathrm{sep}> \mathbf{f},
$$

respectively. Figure 3 illustrates an example of both $\mathbf{x}'$ and $\mathbf{y}'$. The formal representation of our structurally constrained model is the same as Eq. (4). The model arranges the markup tags when generating $\mathbf{t}$ and completes the whole sentence when generating $\mathbf{f}$, which is consistent with our motivation to decompose the whole task into constraint arrangement and free token generation.

# 4 Lexically Constrained Translation

# 4.1 Setup

Parallel Data We conduct experiments on two language pairs: English-Chinese and English-German. For English-Chinese, we use the WMT17 dataset as the training corpus, consisting of 20.6M sentence pairs. For English-German, the training data is from WMT20, containing 41.0M sentence pairs. We provide more details of data preprocessing in the Appendix. Following recent studies on lexically constrained translation (Chen et al., 2021; Wang et al., 2022), we evaluate our method on human-annotated alignment test sets. For English-Chinese, both the validation and test sets are from Liu et al. (2005). For English-German, the test set is from Zenkel et al. (2020).
We use newstest2013 as the validation set, whose word alignment is annotated by fast-align. The training sets are filtered to exclude test and validation sentences.

Lexical Constraints Following recent work (Song et al., 2019; Chen et al., 2020, 2021; Wang et al., 2022), we simulate real-world lexically constrained translation scenarios by sampling constraints from a phrase table extracted from the parallel sentence pairs based on word alignment. The script used to create the constraints is publicly available. Specifically, the number of constraints for each sentence pair ranges between 0 and 3, and the length of each constraint ranges between 1 and 3 tokens. We use fast-align to build the alignment of the training data.

Model Configuration We adopt Transformer (Vaswani et al., 2017) as our NMT model, which is optimized by Adam (Kingma and Ba, 2015) with $\beta_{1} = 0.9$, $\beta_{2} = 0.98$ and $\epsilon = 10^{-9}$. Please refer to the Appendix for more details on the model configuration and the training process.

Baselines We compare our approach with the following six representative baselines:

- Placeholder (Crego et al., 2016): replacing constrained terms with placeholders;
- VDBA (Hu et al., 2019): modifying beam search to incorporate target-side constraints;
- Replace (Song et al., 2019): replacing source text with the corresponding target constraints;
- CDAlign (Chen et al., 2021): inserting target constraints based on word alignment;
- AttnVector (Wang et al., 2022): using attention keys and values to model constraints;
- TextInfill (Xiao et al., 2022): filling free tokens through a bilingual text-infilling task.

Evaluation Metrics We follow Alam et al.
(2021a) in using the following four metrics to make a thorough comparison of the involved methods:

- BLEU (Papineni et al., 2001): measuring the translation quality of the whole sentence;
- Exact Match: the accuracy with which the source constraints in the input sentences are translated into the provided target constraints;
- Window Overlap: the overlap ratio between the hypothesis and the reference windows for each matched target constraint, indicating whether the constraint is placed in a suitable context. The window size is set to 2;
- 1-TERm: a modification of TER (Snover et al., 2006) that sets the edit cost of constrained tokens to 2 and the cost of free tokens to 1.

We use sacreBLEU (Post, 2018) to estimate the BLEU score, and adapt the scripts released by Alam et al. (2021a) for the other three metrics.

# 4.2 Main Results

Template Accuracy We first examine the performance of the model on the template generation sub-task before investigating the translation performance. We compare the target-side template extracted from the reference sentence with the one generated by the model to calculate the accuracy of template generation. Formally, if the reference template $\mathbf{t}$ is $\mathrm{Y}_0 \mathrm{C}_{i_1} \mathrm{Y}_1 \cdots \mathrm{C}_{i_N} \mathrm{Y}_N$, the generated template $\hat{\mathbf{t}}$ is correct if

- $\hat{\mathbf{t}} = \mathrm{Y}_0 \mathrm{C}_{j_1} \mathrm{Y}_1 \dots \mathrm{C}_{j_N} \mathrm{Y}_N$;
- the set $\{j_1, \dots, j_N\}$ equals $\{i_1, \dots, i_N\}$.

In other words, the model must generate all the nonterminals to guarantee the presence of the provided constraints. However, the order of the constraint-related nonterminals can be flexible, since there often exist several suitable orders for the provided constraints. In both English-Chinese and English-German, the template accuracy of our model is $100\%$. An interesting finding is that our model learns to reorder the constraints according to the style of the target language.
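The two correctness conditions above can be checked mechanically. Below is a minimal sketch, assuming templates are whitespace-separated strings of `Y_n`/`C_n` labels; the function name and string representation are ours, not the paper's.

```python
def template_correct(ref_template, hyp_template):
    """A generated template is correct if it keeps the
    Y_0 ... Y_N skeleton of the reference and uses the same *set*
    of constraint nonterminals, in any order."""
    ref_y = [t for t in ref_template.split() if t.startswith("Y")]
    hyp_y = [t for t in hyp_template.split() if t.startswith("Y")]
    ref_c = {t for t in ref_template.split() if t.startswith("C")}
    hyp_c = {t for t in hyp_template.split() if t.startswith("C")}
    return ref_y == hyp_y and ref_c == hyp_c

# A reordering of the constraints still counts as correct:
template_correct("Y_0 C_1 Y_1 C_2 Y_2", "Y_0 C_2 Y_1 C_1 Y_2")  # True
# Repeating C_1 instead of emitting C_2 does not:
template_correct("Y_0 C_1 Y_1 C_2 Y_2", "Y_0 C_1 Y_1 C_1 Y_2")  # False
```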
We provide an example of constraint reordering in Table 1.

When generating the free token derivation $\mathbf{f}$, the model recalls all the nonterminals (i.e., $\mathrm{Y}_n$) present in the template $\mathbf{t}$ in English-Chinese. In English-German, however, the model omits one free token nonterminal with a frequency of $0.2\%$. We use empty strings for the omitted nonterminals when reconstructing the output sentence.

Translation Performance Table 2 shows the results of lexically constrained translation, demonstrating that all the investigated methods recall more of the provided constraints than the unconstrained Transformer model. Our approach improves the BLEU score over the involved baselines. This improvement potentially comes from two aspects: (1) our system outputs match more pre-specified constraints than some baselines, such as AttnVector (Wang et al., 2022) (100% vs. 93.8%); (2) our method places more constraints in an appropriate context, as measured by window overlap. The exact match accuracy of VDBA (Hu et al., 2019) is lower than 100% due to the out-of-vocabulary problem in English-Chinese.

TextInfill (Xiao et al., 2022) and our approach both achieve $100\%$ exact match accuracy in the two language pairs. However, TextInfill can only place the constraints in the pre-specified order,
| | |
| --- | --- |
| Constraints | ⟨slowing down, 减弱⟩; ⟨price hike, 价格上涨⟩ |
| Source | Analysts are concerned that since there is no sign yet of any slowing down of this price hike, the prospect of the British real estate market as where it is heading now is far from optimistic. |
| Reference | 分析家担心,由于目前还看不见价格上涨趋势有减弱的迹象,照此发展下去,英国房地产市场前景堪忧。 |
| Input (enc) | `C1 slowing down C2 price hike <sep> X0 C1 X1 C2 X2 <sep> X0 Analysts are concerned that since there is no sign yet of any X1 of this X2, the prospect of the British real estate market as where it is heading now is far from optimistic.` |
| Prefix (dec) | `C1 减弱 C2 价格上涨 <sep>` |
| Output | `Y0 C2 Y1 C1 Y2 <sep> Y0 分析师们担心,由于目前还没有迹象显示 Y1 会 Y2,英国房地产市场的前景远不乐观。` |
| Result | 分析师们担心,由于目前还没有迹象显示价格上涨会减弱,英国房地产市场的前景远不乐观。 |

Table 1: An example of our method. We replace the nonterminals in the template using the derivation rules to reconstruct the final result (i.e., "Result"). Notably, our model automatically reorders the provided constraints when generating the template. In this example, $C_1$ precedes $C_2$ in the source-side template, but in the target-side template generated by our model, $C_2$ precedes $C_1$, which is more suitable for the target language.
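The "Result" row is obtained by substituting terminals for nonterminals, as described above. A minimal sketch of such a conversion script, assuming `C_`/`Y_`-prefixed nonterminal tokens and a literal `<sep>` delimiter (these conventions and all names are ours, for illustration only):

```python
def reconstruct(output_seq, sep="<sep>"):
    """Rebuild the sentence from an output "d <sep> t <sep> f".

    Each derivation rule is a nonterminal followed by its terminal
    tokens (e.g. "C_1 Jay Chou").  Nonterminals missing from the
    derivations default to the empty string, mirroring how omitted
    nonterminals are handled in the paper.
    """
    d, t, f = [part.strip() for part in output_seq.split(sep)]
    rules = {}
    for deriv in (d, f):
        key = None
        for tok in deriv.split():
            if tok.startswith(("C_", "Y_")):
                key = tok
                rules.setdefault(key, [])
            elif key is not None:
                rules[key].append(tok)
    # Replace each nonterminal in the template by its terminals.
    out = []
    for tok in t.split():
        out += rules.get(tok, [])
    return " ".join(out)

reconstruct("C_1 Jay Chou <sep> Y_0 C_1 Y_1 <sep> Y_0 the singer Y_1 sang")
# -> "the singer Jay Chou sang"
```

As the paper notes, this post-processing is a simple string substitution whose cost is negligible next to decoding.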
while our approach can automatically reorder the constraints. As a result, the window overlap score of our approach is higher than that of TextInfill. Please refer to Table 8 in the Appendix for more translation examples of both our method and some baselines.

| Method | BLEU | Exact Match | Window Overlap | 1-TERm | BLEU | Exact Match | Window Overlap | 1-TERm |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | English-Chinese | | | | English-German | | | |
| Vanilla | 42.7 | 10.1 | 4.8 | 35.7 | 24.8 | 10.0 | 8.1 | 39.2 |
| Placeholder | 46.6 | 99.4 | 33.9 | 41.5 | 27.2 | 100.0 | 29.4 | 44.6 |
| VDBA | 45.8 | 99.6 | 33.4 | 41.7 | 29.0 | 100.0 | 31.1 | 45.1 |
| Replace | 46.4 | 93.8 | 35.5 | 40.7 | 31.1 | 96.6 | 35.7 | 48.3 |
| CDAlign | 46.2 | 92.1 | 31.7 | 41.6 | 29.7 | 95.9 | 32.3 | 46.3 |
| AttnVector | 46.9 | 93.8 | 35.8 | 42.4 | 31.3 | 97.5 | 37.2 | 47.9 |
| TextInfill | 45.6 | 100.0 | 32.8 | 39.9 | 30.7 | 100.0 | 35.5 | 47.1 |
| Ours | 47.5 | 100.0 | 36.9 | 43.1 | 32.3 | 100.0 | 38.5 | 49.8 |

Table 2: Results of the lexically constrained translation task for both English-Chinese and English-German (the first four score columns are English-Chinese, the last four English-German).

# 4.3 Unconstrained Translation

A concern with lexically constrained translation methods is that they may degrade translation quality in unconstrained scenarios. We thus evaluate our approach on the standard translation task, where the model is only provided with the source sentence $\mathbf{x}$. In this case, the input and output are given by

$$
\mathbf{x}' = \phi <\mathrm{sep}> \mathrm{X}_0 <\mathrm{sep}> \mathrm{X}_0 \, \mathbf{x}, \tag{15}
$$

$$
\mathbf{y}' = \phi <\mathrm{sep}> \mathrm{Y}_0 <\mathrm{sep}> \mathrm{Y}_0 \, \mathbf{y},
$$

respectively. The BLEU scores of our method are 42.6 and 25.0 for English-Chinese and English-German, respectively. The performance of our method is comparable with the vanilla model, which dispels the concern that our approach may worsen unconstrained translation quality.

# 4.4 Inference Speed
| Methods | Speed |
| --- | --- |
| Vanilla | 3392 tokens per second |
| Ours | 3390 tokens per second |

Table 3: Inference speed of our method and the vanilla model on the English-Chinese validation set.

Table 3 shows the decoding speed. Since we change neither the model architecture nor the decoding algorithm, the speed of our method is close to that of the vanilla Transformer model (Vaswani et al., 2017). Although our per-token speed is almost the same as the vanilla model's, our inference time is slightly longer, because the output sequence $\mathbf{y}'$ is longer than the original target-language sentence $\mathbf{y}$.
| Method | BLEU | Correct | Match | BLEU | Correct | Match |
| --- | --- | --- | --- | --- | --- | --- |
| | English-French | | | English-Russian | | |
| Remove | 31.4 | n/a | n/a | 21.0 | n/a | n/a |
| Split-Inject | 66.1 | 100.00 | 100.00 | 43.1 | 100.00 | 99.85 |
| XML | 65.3 | 99.55 | 99.30 | 44.9 | 99.45 | 98.90 |
| Ours | 67.3 | 100.00 | 100.00 | 45.8 | 100.00 | 99.80 |
| | English-Chinese | | | English-German | | |
| Remove | 31.5 | n/a | n/a | 25.7 | n/a | n/a |
| Split-Inject | 57.0 | 100.00 | 99.30 | 50.7 | 100.00 | 99.80 |
| XML | 61.2 | 99.85 | 99.75 | 52.7 | 99.80 | 99.20 |
| Ours | 61.5 | 100.00 | 99.80 | 53.6 | 100.00 | 99.80 |

("Correct" and "Match" are the two Structure Accuracy scores defined in Section 5.1.)
Table 4: Results of the structurally constrained translation task. We highlight the highest score in bold and the second-highest score with underlines.

# 4.5 Effect of Data Scale

![](images/56c1fbabb6c36439c9f530ff8ed86afb09b8bca4559767e241f06fd0f1f0ef4d.jpg)

![](images/20d9173bc42b374bbf7907b129d2e08a75cd08ec0f5915b1d0d7fa2dccf14ba5.jpg)

![](images/9c5f24ff50baaf6922040c35dd29dc3f4d148cf90c11758855244ec39697b069.jpg)

![](images/56a1ec2519e6323d3c8e95596bc35a2436a557607f6a5203f4e4a644210c404d.jpg)
Figure 4: Effect of data scale. The results are reported on the English-Chinese validation set.

We vary the amount of training data to investigate the effect of data scale on our approach. Figure 4 shows the results. The BLEU score increases with the data size, while the window overlap score peaks at 10.0M training examples. The 1-TERm metric achieves its best value when using all the training data. The exact match accuracy of our method stays at $100\%$ even with only 0.6M training examples. This trend implies that our method can be applied in low-resource scenarios.

# 4.6 More Analysis

Due to space limitations, we place a more detailed analysis of our approach in the Appendix, including the effect of the alignment model, the performance on more language pairs, and the domain robustness of our model, evaluated on the WMT21 terminology translation task (Alam et al., 2021b), which lies in the COVID-19 domain.

# 5 Structurally Constrained Translation

# 5.1 Setup

Data We conduct our experiments on the dataset released by Hashimoto et al. (2019), which supports translation from English into seven other languages. We select four target languages: French, Russian, Chinese, and German. For each language pair, the training set contains roughly 100K sentence pairs. We report results on the validation sets since the test sets are not open-sourced. We follow Hashimoto et al.
(2019) in using SentencePiece, which supports user-defined special symbols, to preprocess the data. The model type of SentencePiece is set to unigram, and the vocabulary size is set to 9000. For English-Chinese, we over-sample the English sentences when learning the joint tokenizer, since Chinese has more unique characters than English (Hashimoto et al., 2019). We do not perform over-sampling for the other language pairs. We register the XML tags and URL placeholders as user-defined special symbols. In addition, we also register `&`, `<`, and `>` as special tokens, following Hashimoto et al. (2019).

Model Configuration Since the data scale for structurally constrained translation is much smaller than that for lexically constrained translation, we follow Hashimoto et al. (2019) in setting the width of the model to 256 and the depth to 6. See Section B.1 in the Appendix for more details.

Baselines We compare our approach with the following three baselines:

- Remove: removing the markup tags and only translating the plain text;
- Split-Inject (Al-Anzi et al., 1997): splitting the input sentence at the markup tags, translating each text fragment independently, and finally injecting the tags;
- XML (Hashimoto et al., 2019): directly learning the NMT model end-to-end on parallel sentences with XML tags.

Evaluation Metrics We follow Hashimoto et al. (2019) in using the following metrics:

- BLEU: considering the structure when estimating the BLEU score (Papineni et al., 2001);
- Structure Accuracy: using the etree package to check whether the system output is valid XML (i.e., Correct), and whether the output structure exactly matches the structure of the given reference (i.e., Match).

All the metrics are calculated using the evaluation script released by Hashimoto et al. (2019).

# 5.2 Main Results

Template Accuracy We first examine the accuracy of the generated templates.
A generated template is correct if

- the template is a valid XML structure;
- the template recalls all the markup tags of the input sentence.

The template accuracy of our method is $100\%$ in all four language pairs. As in lexically constrained translation, the model may omit some free token nonterminals (i.e., $\mathrm{Y}_n$) when generating the derivation $\mathbf{f}$; the omission rates are $0.4\%$, $0.6\%$, $0.1\%$, and $0.9\%$ in English-French, English-Russian, English-Chinese, and English-German, respectively. We use empty strings for the omitted nonterminals when reconstructing the output sentence.

Translation Performance Table 4 shows the results of all the involved methods. Our approach improves the BLEU score over the three baselines, and the structure correctness is $100\%$. Although Split-Inject also guarantees the correctness of the output, its BLEU score is much lower, likely because some fragments are translated without essential context. The structure match accuracy with respect to the given reference is not necessarily $100\%$, since the order of markup tags can be diverse due to the variety of natural language. See Table 9 in the Appendix for some translation examples.

# 6 Conclusion

In this work, we propose a template-based framework for constrained translation and apply it to two specific tasks: lexically and structurally constrained translation. Our motivation is to decompose the generation of the whole sequence into the arrangement of constraints and the generation of free tokens, both of which can be learned through a sequence-to-sequence framework. Experiments demonstrate that the proposed method achieves high translation quality and match accuracy simultaneously, and that its inference speed is comparable with unconstrained NMT baselines.

# Limitations

A limitation of this work is that our method cannot cope with one-to-many constraints.
Moreover, we only validate the proposed template-based framework in machine translation tasks. However, constrained sequence generation is vital in many other NLP tasks, such as table-to-text generation (Parikh et al., 2020), text summarization (Liu et al., 2018), and text generation (Dathathri et al., 2020). In the future, we will apply the proposed method to more constrained sequence generation tasks. + +# Acknowledgments + +This work was supported by the National Natural Science Foundation of China (No. 61925601, No. 62006138), the National Social Science Fund of China (No. 62236011), Beijing Academy of Artificial Intelligence (BAAI), a grant from the Guoqiang Institute, Tsinghua University, and the Tencent AI Lab Rhino-Bird Focused Research Program (No. JR202031). We thank all the reviewers for their valuable and insightful comments. + +# References + +F Al-Anzi, K Al-Zame, M Husain, and H Al-Mutairi. 1997. Automatic english/arabic html home page translation tool. In Proc. 1st Workshop Technol. Arabizing Internet. +Md Mahfuz Ibn Alam, Antonios Anastasopoulos, Laurent Besacier, James Cross, Matthias Gallé, Philipp Koehn, and Vassilina Nikoulina. 2021a. On the evaluation of machine translation for terminology consistency. CoRR, abs/2106.11891. +Md Mahfuz Ibn Alam, Ivana Kvapilíková, Antonios Anastasopoulos, Laurent Besacier, Georgiana Dinu, Marcello Federico, Matthias Galle, Kweonwoo Jung, Philipp Koehn, and Vassilina Nikoulina. 2021b. Findings of the WMT shared task on machine translation using terminologies. In Proceedings of the Sixth Conference on Machine Translation, pages 652-663. +Toms Bergmanis and Marcis Pinnis. 2021. Facilitating terminology translation with target lemma annotations. In Proceedings of EACL 2021. +Guanhua Chen, Yun Chen, and Victor O.K. Li. 2021. Lexically constrained neural machine translation with explicit alignment guidance. In Proceedings of AAAI 2021. +Guanhua Chen, Yun Chen, Yong Wang, and Victor O.K. Li. 2020. 
Lexical-constraint-aware neural machine translation via data augmentation. In Proceedings of IJCAI 2020. +Josep Maria Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurélien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, Jing Zhou, and Peter Zoldan. 2016. Systran's pure neural machine translation systems. CoRR, abs/1610.05540. +Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In Proceedings of ICLR 2020. +Georgiana Dinu, Prashant Mathur, Marcello Federico, and Yaser Al-Onaizan. 2019. Training neural machine translation to apply terminology constraints. In Proceedings of ACL 2019. +Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Proceedings of NeurIPS 2019. +Greg Hanneman and Georgiana Dinu. 2020. How should markup tags be translated? In Proceedings of the Fifth Conference on Machine Translation. + +Kazuma Hashimoto, Raffaella Buschiazzo, James Bradbury, Teresa Marshall, Richard Socher, and Caiming Xiong. 2019. A high-quality multilingual dataset for structured documentation translation. In Proceedings of the Fourth Conference on Machine Translation. +Eva Hasler, Adrià de Gispert, Gonzalo Iglesias, and Bill Byrne. 2018. Neural machine translation decoding with terminology constraints. In Proceedings of NAACL 2018. +Chris Hokamp and Qun Liu. 2017. Lexically constrained decoding for sequence generation using grid beam search. In Proceedings of ACL 2017. +J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, and Benjamin Van Durme. 2019. 
Improved lexically constrained decoding for translation and monolingual rewriting. In Proceedings of NAACL 2019. +Eric Joanis, Darlene Stewart, Samuel Larkin, and Roland Kuhn. 2013. Transferring markup tags in statistical machine translation: a two-stream approach. In Proceedings of the 2nd Workshop on Post-editing Technology and Practice. +Josef Jon, João Paulo Aires, Dusan Varis, and Ondrej Bojar. 2021. End-to-end lexically constrained machine translation for morphologically rich languages. In Proceedings of ACL-IJCNLP 2021. +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. +Philipp Koehn. 2009. A process study of computer-aided translation. Machine Translation, 23(4):241-263. +Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL 2007. +Huayang Li, Guoping Huang, Deng Cai, and Lemao Liu. 2020. Neural machine translation with noisy lexical constraints. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:1864-1874. +Linqing Liu, Yao Lu, Min Yang, Qiang Qu, Jia Zhu, and Hongyan Li. 2018. Generative adversarial network for abstractive text summarization. In Proceedings of AAAI 2018. +Yang Liu, Qun Liu, and Shouxun Lin. 2005. Log-linear models for word alignment. In Proceedings of ACL 2005. +Elise Michon, Josep Crego, and Jean Senellart. 2020. Integrating domain terminology into neural machine translation. In Proceedings of COLING 2020. + +Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. Step-by-step: Separating planning from realization in neural data-to-text generation. In Proceedings of NAACL 2019. +Mathias Müller, Annette Rios, and Rico Sennrich. 2020. Domain robustness in neural machine translation. In Proceedings of AMTA 2020. 
Jan Niehues. 2021. Continuous learning in neural machine translation using bilingual dictionaries. In Proceedings of EACL 2021.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL 2001.
Ankur Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. ToTTo: A controlled table-to-text generation dataset. In Proceedings of EMNLP 2020.
Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers.
Matt Post and David Vilar. 2018. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In Proceedings of NAACL 2018.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of ACL 2016.
Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of AMTA 2006, pages 223-231.
Kai Song, Yue Zhang, Heng Yu, Weihua Luo, Kun Wang, and Min Zhang. 2019. Code-switching for enhancing NMT with pre-specified translation. In Proceedings of NAACL 2019.
Raymond Hendy Susanto, Shamil Chollampatt, and Liling Tan. 2020. Lexically constrained neural machine translation with Levenshtein transformer. In Proceedings of ACL 2020.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NeurIPS 2017.
Shuo Wang, Zhixing Tan, and Yang Liu. 2022. Integrating vectorized lexical constraints for neural machine translation. In Proceedings of ACL 2022.
Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. In Proceedings of ICLR 2019.
Yanling Xiao, Lemao Liu, Guoping Huang, Qu Cui, Shujian Huang, Shuming Shi, and Jiajun Chen. 2022. BiTIIMT: A bilingual text infilling method for interactive machine translation. In Proceedings of ACL 2022.
Thomas Zenkel, Joern Wuebker, and John DeNero. 2020. End-to-end neural word alignment outperforms GIZA++. In Proceedings of ACL 2020.
Hao Zhang, Richard Sproat, Axel H. Ng, Felix Stahlberg, Xiaochang Peng, Kyle Gorman, and Brian Roark. 2019. Neural models of text normalization for speech applications. Computational Linguistics, 45(2).
Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, and Yang Liu. 2021. Neural machine translation with explicit phrase alignment. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:1001-1010.

# A Supplementary Material for Lexically Constrained Translation

# A.1 More Details on Data

For the lexically constrained translation task, Chinese sentences are segmented by Jieba, while English and German sentences are tokenized using Moses (Koehn et al., 2007). The tokenized sentences are then processed by BPE (Sennrich et al., 2016) with $32\mathrm{K}$ merge operations for both language pairs. We detokenize the model outputs before calculating sacreBLEU.

# A.2 More Details on Model

We adopt Transformer (Vaswani et al., 2017) as our NMT model. For English-Chinese, we use the base model, with depth 6 and width 512. For English-German, we use the big model, with depth 6 and width 1024. The base and big models are optimized using the corresponding learning schedules introduced in Vaswani et al. (2017). We train base models for 200K iterations using 4 NVIDIA V100 GPUs and big models for 300K iterations using 8 NVIDIA V100 GPUs. Each mini-batch contains approximately 32K tokens in total. All the models are optimized using Adam (Kingma and Ba, 2015), with $\beta_{1} = 0.9$, $\beta_{2} = 0.98$ and $\epsilon = 10^{-9}$.
In all experiments, both the dropout rate and the label smoothing penalty are set to 0.1. The beam size is set to 4. + +# A.3 Effect of Alignment Model + +In this work, we use an alignment model to produce word alignments for the training set, which is then used for phrase table extraction. By default, we use all the parallel data in the training set to train the alignment model, using the fast-align toolkit. To better understand the effect of the alignment model, we replace the default alignment model with a weaker one that is trained using only 0.1M sentence pairs. Table 5 shows the result, from which we find that using the weaker word alignment can negatively affect the BLEU score. However, the exact match accuracy is still $100\%$ , and changes in the other two metrics are modest. + +# A.4 Domain Robustness + +Domain robustness is about the generalization of machine learning models to unseen test domains (Müller et al., 2020). In our experiments, + +
| # Sent. | BLEU | Exact Match | Window Overlap | 1-TERm |
| --- | --- | --- | --- | --- |
| 0.1M | 37.5 | 100.0 | 32.7 | 37.5 |
| 20.6M | 38.2 | 100.0 | 32.9 | 37.6 |
+ +Table 5: Effect of the alignment model on the English-Chinese validation set. "# Sent." means the number of sentence pairs used to train the alignment model. + +
| Method | BLEU | Exa. Mat. | Win. Ove. | 1-TERm |
| --- | --- | --- | --- | --- |
| Vanilla | 37.7 | 58.1 | 19.4 | 37.9 |
| Placeholder | 38.5 | 98.9 | 24.4 | 38.8 |
| VDBA | 38.0 | 100.0 | 24.3 | 39.1 |
| Replace | 38.4 | 87.3 | 24.5 | 39.7 |
| CDAlign | 38.6 | 89.3 | 24.0 | 40.5 |
| TextInfill | 38.7 | 97.0 | 23.2 | 38.4 |
| Ours | 39.6 | 100.0 | 26.3 | 41.3 |
+ +Table 6: Results on the English-Chinese test set of the WMT21 terminology translation task. + +all the involved models are trained in the news domain. We evaluate the domain robustness of these methods on the WMT21 terminology translation task (Alam et al., 2021b)7, which lies in the COVID-19 domain. Since this task does not support English-German translation, we only conduct this experiment on English-Chinese. In this test set, the maximum number of constraints is 12. We thus modify the phrase extraction script to increase the maximum number of constraints from 3 to 12, and then re-train both the baselines and our models. Note that we only change the number of constraints, while the training domain is still news. Since the open-sourced implementation of AttnVector (Wang et al., 2022)8 does not support more than 3 constraints, we omit this baseline in this experiment. The test set of the WMT21 terminology translation task also contains some constraints that consist of more than one target term (i.e., one-to-many constraints). We only select the one that appears in the reference as our constraint. We leave it to future work to extend the current framework to one-to-many constraints. + +Table 6 provides the results on the COVID-19 domain, where our approach performs best across all four evaluation metrics. VDBA (Hu et al., 2019) and our method can both maintain the exact match accuracy, while the other three baselines
| Method | BLEU | Exa. Mat. | Win. Ove. | 1-TERm |
| --- | --- | --- | --- | --- |
| **Chinese-English** | | | | |
| Vanilla | 23.3 | 17.6 | 10.4 | 36.6 |
| AttnVector | 25.9 | 95.5 | 35.5 | 42.1 |
| TextInfill | 25.0 | 100.0 | 33.3 | 39.0 |
| Ours | 26.7 | 100.0 | 37.3 | 45.1 |
| **German-English** | | | | |
| Vanilla | 32.4 | 9.5 | 7.3 | 45.8 |
| AttnVector | 37.8 | 91.4 | 36.4 | 53.3 |
| TextInfill | 37.2 | 100.0 | 37.1 | 51.4 |
| Ours | 38.8 | 100.0 | 39.7 | 53.4 |
+ +Table 7: Results of the lexically constrained translation task in Chinese-English and German-English. + +achieve much lower exact match accuracy due to the domain shift. However, the BLEU score of VDBA is lower than that of the other constrained translation approaches, whereas our method also achieves the best BLEU score. The exact match accuracy of TextInfill (Xiao et al., 2022) is lower than $100\%$ because sometimes the model cannot generate all the slots within the length limitation. The results indicate that our approach can better cope with constraints coming from unseen domains. + +# A.5 X-English Translation + +We also conduct experiments on X-English translation directions (i.e., Chinese-English and German-English). Due to limited computational resources, we only train the two most recent baselines: AttnVector (Wang et al., 2022) and TextInfill (Xiao et al., 2022). Among the baselines, AttnVector and TextInfill achieve the best BLEU score and the highest exact match accuracy, respectively. As shown in Table 7, we find that our approach performs well in both Chinese-English and German-English, achieving $100\%$ exact match accuracy and a better BLEU score. + +# A.6 Case Study + +As mentioned in Section 4.2, our approach outperforms the baselines in the lexically constrained translation task. To better understand the difference between our approach and some representative baselines, we list some examples in Table 8. + +# B Supplementary Material for Structurally Constrained Translation + +# B.1 More Details on Model + +All the models are trained for 40K iterations in all four translation directions. We adopt the cosine learning rate schedule presented in Wu et al. (2019), but we set the maximum learning rate to $7 \times 10^{-4}$ and the warmup step to 8K. The period of the cosine function is set to 32K, which means that the learning rate decays to its minimum value at the end of the training.
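One plausible reading of this schedule, sketched in code (the linear warmup shape and the floor value `lr_min` are our assumptions, since the paper only states that the rate decays to its minimum):

```python
import math

def lr_at(step, lr_max=7e-4, lr_min=1e-7, warmup=8_000, period=32_000):
    """Warmup then cosine decay; with warmup + period = 40K steps, the rate
    reaches lr_min exactly at the last training iteration described above."""
    if step < warmup:
        return lr_max * step / warmup  # linear warmup (assumed shape)
    t = min(step - warmup, period)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / period))
```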
Both the dropout rate and the label smoothing penalty are set to 0.2. Each mini-batch consists of approximately 32k tokens in total. We use Adam (Kingma and Ba, 2015) for model optimization, with $\beta_{1} = 0.9$ , $\beta_{2} = 0.98$ and $\epsilon = 10^{-9}$ . We also set the weight decay coefficient to $10^{-3}$ . Both the baseline models and our models are trained using the same hyperparameters. + +# B.2 Case Study + +We list some translation examples in Table 9 to provide a detailed understanding of our work. The examples demonstrate that our approach can effectively cope with structured inputs. + +
| | |
| --- | --- |
| Constraints | <guests, 来宾>; <culinary culture, 食品文化>; <Chinese-style, 中式> |
| Source | Wang Kaiwen, Chinese ambassador to Latvia, introduced to the guests a few major styles of cooking in Chinese gourmet foods and expressed his hope that through tasting Chinese-style gourmet foods more will be learned about China and Chinese culinary culture. |
| Reference | 中国驻拉脱维亚大使王开文向来宾们介绍了中国美食的几大菜系,表示希望通过品尝中式美味食品更多了解中国和中国 食品文化。 |
| AttnVector | 中国驻拉托维亚大使王开文向来宾介绍了中国美食食品的几种主要烹饪方式,并表示希望通过品尝中式 美食,更多地了解中国和中国的文化。 |
| TextInfill | 中国驻拉脱维亚大使王开文向来宾介绍了几种主要的中国美食 食品文化,并表示希望通过品尝中式 美食,能够了解更多关于中国和中国烹饪文化的知识。 |
| Ours | 中国驻拉脱维亚大使王开文向来宾介绍了中国美食的几种主要烹饪风格,并表示希望通过品尝中式 美食,更多地了解中国和中国的 食品文化。 |
| Constraints | <Italian engineer, 义大利工程师>; <Gidzenko, 吉曾柯>; <Shuttleworth, 夏特沃斯> |
| Source | Returning together with Shuttleworth to earth are the Russian spacecraft commander Gidzenko and the Italian engineer Vittori who entered space with him. |
| Reference | 与夏特沃斯一同返回地球的,是这次和他一起进入太空的俄罗斯太空船指挥官 吉曾柯与 义大利工程师 维托利。 |
| AttnVector | 吉曾柯和 义大利工程师 维托利与 夏特沃斯一同返回地球,他们一同进入太空。 |
| TextInfill | 俄罗斯太空船指挥官吉登科(Gidzenko)和 义大利工程师 吉曾柯(Vittori)与 夏特沃斯一起重返地球。 |
| Ours | 与夏特沃斯一起返回地球的是俄罗斯航天器指挥官 吉曾柯和与他一同进入太空的 义大利工程师 维托里。 |
+ +Table 8: Examples for lexically constrained translation. For clarity, we only list the results of two representative baselines. We choose AttnVector (Wang et al., 2022) and TextInfill (Xiao et al., 2022) since they achieve the best BLEU score and the highest exact match accuracy among the baselines, respectively. In the first example, AttnVector omits the target constraint 食品文化 in its output, while both TextInfill and our approach generate all three constraints. In the second example, TextInfill places the constraint 吉曾柯 in the wrong context, while our approach outputs a better result.
| | |
| --- | --- |
| Source | ... <ph> Each dashboard can have up to <ph> 3 </ph> filters. Contact <ph> Salesforce </ph> to increase the filter options limit in <ph> Salesforce Classic </ph>. A maximum of <ph> 50 </ph> filter options is possible. </ph> |
| Reference | ... <ph> Chaque tableau de bord peut inclure jusqu'à <ph> 3 </ph> filtres. Pour augmenter les limitations des options de filtrage dans <ph> Salesforce Classic </ph>, contactez <ph> Salesforce </ph> . <ph> 50 </ph> options défiltre sont possibles au maximum. </ph> |
| Split-Inject | ... <ph> Chaque tableau de bord peut avoir jusqu'à <ph> 3 </ph> filtres. Contact <ph> Salesforce </ph> pour accroirtre la limitation des options de filtrage <ph> Salesforce Classic </ph>. maximum d'un maximum <ph> 50 </ph> Les options de filtrage sont possibles. </ph> |
| XML | ... <ph> Chaque tableau de bord peut avoir jusqu'à <ph> 3 </ph> filtres. Pour augmenter la limitation en options de filtrage dans <ph> Salesforce Classic </ph>, chaque filtre peut inclure jusqu'à <ph> 50 </ph> options de filtrage. </ph> |
| Ours | ... <ph> Chaque tableau de bord peut avoir jusqu'à <ph> 3 </ph> filtres. Contactez <ph> Salesforce </ph> pour augmenter les options de limitation de filtrage dans <ph> Salesforce Classic </ph>. Un maximum de <ph> 50 </ph> options de filtrage est possible. </ph> |
| Source | Each <ph> Event Monitoring app </ph> user needs an <ph> Event Monitoring Analytics Apps </ph> permission set license. The <ph> Event Monitoring Analytics Apps </ph> permission set license enables the following permissions. |
| Reference | Chaque utiliser de l' application Event Monitoring </ph> doit-disposer d'une licence d'ensemble d'autorisations <ph> Event Monitoring Analytics Apps </ph> . La licence d'ensemble d'autorisations <ph> Event Monitoring Analytics Apps </ph> accorde les autorisations ci-dessous. |
| Split-Inject | Chaque <ph> Application Event Monitoring </ph> utiliser doit avoir un utiliser <ph> Applications Event Monitoring Analytics </ph> Licence d'ensemble d'autorisations. <ph> Applications Event Monitoring Analytics </ph> La licence d'ensemble d'autorisations active les autorisations ci-dessous. |
| XML | Chaque utiliser de l' application Event Monitoring </ph> doit-disposer d'une licence d'ensemble d'autorisations <ph> Event Monitoring Analytics Apps </ph> . La licence d'ensemble d'autorisations <ph> Event Monitoring Analytics Apps </ph> active les autorisations ci-dessous. |
| Ours | Chaque utiliser de l' application Event Monitoring </ph> doit-disposer d'une licence d'ensemble d'autorisations <ph> Event Monitoring Analytics Apps </ph> . La licence d'ensemble d'autorisations <ph> Event Monitoring Analytics Apps </ph> active les autorisations suivantes. |
+ +Table 9: Examples for structurally constrained translation. We only highlight some text fragments wrapped by markup tags to show the difference between the involved methods. In the first example, XML (Hashimoto et al., 2019) omits the fragment Salesforce , while Split-Inject and our method recall all the markup tags of the source sentence. In the second example, the colored contents are mistranslated by Split-Inject, which is potentially caused by the lack of context when translating these fragments. \ No newline at end of file diff --git a/atemplatebasedmethodforconstrainedneuralmachinetranslation/images.zip b/atemplatebasedmethodforconstrainedneuralmachinetranslation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..73b0d6ee885ddd523fef933d96029cbf82416ea1 --- /dev/null +++ b/atemplatebasedmethodforconstrainedneuralmachinetranslation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1491b9f75825afafb6e8ef68c9226aa82de5231199a280fc7f201f94cd042e94 +size 992323 diff --git a/atemplatebasedmethodforconstrainedneuralmachinetranslation/layout.json b/atemplatebasedmethodforconstrainedneuralmachinetranslation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ff0f1c93ca5d599e3cc59edfff10cbbabc90a32e --- /dev/null +++ b/atemplatebasedmethodforconstrainedneuralmachinetranslation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6052cf1d20d663b34f291b1a44c66b20218fa5aca5056b9d84874a8d19824c65 +size 524796 diff --git a/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/2416b46e-92d2-49df-89b1-a222dd7dd1a2_content_list.json b/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/2416b46e-92d2-49df-89b1-a222dd7dd1a2_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d54d273600787031d45f7aa1a43c1d533395f496 --- /dev/null +++ 
b/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/2416b46e-92d2-49df-89b1-a222dd7dd1a2_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c80e4cf5bd03421b1615dc0be4f8020e548ff83e2163515ea0799221f87f3bef +size 124980 diff --git a/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/2416b46e-92d2-49df-89b1-a222dd7dd1a2_model.json b/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/2416b46e-92d2-49df-89b1-a222dd7dd1a2_model.json new file mode 100644 index 0000000000000000000000000000000000000000..98cbe2c60e7770718e0bddcf46ad5973d0875b74 --- /dev/null +++ b/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/2416b46e-92d2-49df-89b1-a222dd7dd1a2_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c63ff29037c5e8c6176ef3487c51b77394934d9f37d3c13dd7e1f1aad613e8e7 +size 154885 diff --git a/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/2416b46e-92d2-49df-89b1-a222dd7dd1a2_origin.pdf b/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/2416b46e-92d2-49df-89b1-a222dd7dd1a2_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a44162590f57bf561ba7155135dcd77bb1403fcd --- /dev/null +++ b/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/2416b46e-92d2-49df-89b1-a222dd7dd1a2_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:893fa47e0190af6e093f322607a1e86934818873f920a4f12a1ea5d2ce4b354f +size 793230 diff --git a/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/full.md b/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c321111d86cc8b34c48bd5beecd6eab173a35621 --- /dev/null +++ 
b/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/full.md @@ -0,0 +1,511 @@ +# ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts + +Akari Asai $^{\diamond}$ Mohammadreza Salehi $^{\diamond}$ Matthew E. Peters $^{\diamond}$ Hannaneh Hajishirzi $^{\diamond}$ + +$\diamond$ University of Washington $\diamond$ Allen Institute for AI {akari, mrsalehi, hannaneh}@cs.washington.edu matthewp@allenai.org + +# Abstract + +This work introduces a new multi-task, parameter-efficient language model (LM) tuning method that learns to transfer knowledge across different tasks via a mixture of soft prompts—small prefix embedding vectors pretrained for different tasks. Our method, called ATTEMPT (ATTEntional Mixtures of Prompt Tuning), obtains source prompts as encodings of large-scale source tasks into a small number of parameters and trains an attention module to interpolate the source prompts and a newly initialized target prompt for every instance in the target task. During training, only the target task prompt and the attention weights, which are shared between tasks in multi-task training, are updated, while the original LM and source prompts are intact. ATTEMPT is highly parameter-efficient (e.g., updates 2,300 times fewer parameters than full fine-tuning), while achieving high task performance using knowledge from high-resource tasks. Moreover, it is modular using pre-trained soft prompts and can flexibly add or remove source prompts for effective knowledge transfer. Our experimental results across 21 diverse NLP datasets show that ATTEMPT significantly outperforms prompt tuning and outperforms or matches fully fine-tuned or other parameter-efficient tuning approaches that use over ten times more parameters. Finally, ATTEMPT outperforms previous work in few-shot learning settings. 
$^{1}$ + +# 1 Introduction + +Fine-tuning all the parameters of large language models (LMs) given target task training data is the most common practice for optimizing task performance (Devlin et al., 2019; Raffel et al., 2020). A recent line of research introduces parameter-efficient tuning methods (Houlsby et al., 2019; Li and Liang, 2021; Ben Zaken et al., 2022) that only + +![](images/374c021f04152675a3b7e9c91ab493656b9eba20ca9b51a6f923c4c802e7e08d.jpg) +Figure 1: ATTEMPT combines multiple soft prompts trained on large-scale datasets (source prompts) to generate instance-wise prompts for a target task. During target task training, the LM and the source prompts remain intact. + +update a small number of LM parameters; however, increasing efficiency often decreases the task performance (He et al., 2022). Moreover, these models are trained only using the task training data and do not benefit from large collections of other NLP tasks (Liu et al., 2019a). We posit that parameter-efficient tuning methods can leverage rich knowledge of high-resource tasks to improve both training efficiency and task performance. + +This work introduces a new parameter-efficient, modular multi-task tuning method called ATTEMPT (ATTEntional Mixtures of Prompt Tuning, previewed in Figure 1). ATTEMPT efficiently integrates knowledge from multiple tasks via a mixture of trainable soft prompts prepended to the input, keeping the original LM completely frozen. It first pre-trains transferable soft embeddings (Lester et al., 2021), called source prompts, on large-scale source tasks, which are likely to contain knowledge beneficial to other tasks. Then, ATTEMPT initializes a new target prompt for a given target task and learns an attention-weighted combination of source prompts and the target prompt. The attention module is a lightweight network that can be shared and trained simultaneously across tasks.
+ +ATTEMPT offers three key advantages over previous multi-task fine-tuning or parameter-efficient tuning methods: first, it is highly parameter-efficient and achieves competitive performance despite updating only $0.4\%$ of the parameters updated in full fine-tuning. Second, it enables modular multi-task learning using pre-trained soft prompts, where knowledge from different tasks can be flexibly combined, reused, or removed, and new tasks can be added to the lists of source or target tasks. Unlike prior work that relies on precomputed priors on which tasks are related, ATTEMPT learns to focus on useful tasks from many source tasks. Moreover, at inference, a single LM with multiple pre-loaded soft prompts can perform multiple tasks without parameter reloading. Lastly, it improves interpretability of underlying task similarities in multi-task learning by generating attention distributions. + +We conduct experiments on 21 datasets across diverse tasks, domains and output formats. ATTEMPT significantly outperforms previous prompt tuning-based approaches and matches state-of-the-art parameter-efficient transfer approaches or fully fine-tuned models that train orders of magnitude more parameters, especially on smaller datasets. ATTEMPT is also effective in few-shot domain adaptation (i.e., 4-32 shots). + +Our analysis further shows that ATTEMPT is particularly parameter-efficient and competitive with larger backbone LMs, where other parameter-efficient transfer approaches show rapid increases in trainable parameters. Our ablation studies suggest that learned attentions, multi-task learning and modular transfer from multiple tasks largely contribute to the performance improvements. The attention distributions show the underlying similarities among seemingly different tasks (e.g., entailment and paraphrase detection), indicating signals for effective knowledge transfer across tasks.
+ +# 2 Background and Problem Setup + +We first review common paradigms in NLP for learning a target task, which differ in terms of available data and resources. We then describe our problem setup with respect to these paradigms. + +Fine-tuning. The most common practice in learning a new target task $T_{target}$ is to fine-tune all parameters of a pre-trained LM on the target task training data $\{(x, y)\}$ (e.g., Devlin et al. 2019). Formally, given pre-trained LM parameters $\theta$ , fine-tuning results in a specialized model $\theta_{target}$ by optimizing: $\max_{\theta_{target}} p_{\theta_{target}}(y \mid x)$ . + +Parameter-efficient tuning. To decrease training costs, parameter-efficient tuning updates a small number of parameters for the target task $\phi_{target}$ : $\max_{\phi_{target}} p_{\theta, \phi_{target}}(\boldsymbol{y} \mid \boldsymbol{x})$ , where the number of parameters in $\phi_{target}$ is much smaller than in $\theta_{target}$ . Adapter (Houlsby et al., 2019) and its variants (Mahabadi et al., 2021a; Rücklé et al., 2021) insert trainable layers in the LMs for each task, and BitFit (Ben Zaken et al., 2022) directly updates LM biases only. Highly efficient prefix-tuning (Li and Liang, 2021) and prompt tuning (Lester et al., 2021) keep the original LM frozen and only update soft prompts prepended to the input. In-context learning (Brown et al., 2020) uses massive-scale LMs to learn new tasks from demonstrations (hard prompts) without any parameter update of $\theta$ , but often performs worse than the aforementioned methods with parameter updates (Liu et al., 2022). Given the rapidly increasing size of pre-trained LMs (Chowdhery et al., 2022; Brown et al., 2020), efficiently tuning to a new target task is desirable, but it often incurs a performance cost compared to the fine-tuning methods or shows sensitivity to initialization (Li and Liang, 2021; Lester et al., 2021).
SPoT (Vu et al., 2022) demonstrates that transferring prompts to another task enhances performance, at the cost of a massive search. + +Multi-task transfer learning. Transfer learning methods attempt to learn a new target task given a collection of source tasks by updating the parameters of an LM, which has been proven effective in NLP (Khashabi et al., 2020; Raffel et al., 2020). Common approaches train on many different tasks (Liu et al., 2019a; Aribandi et al., 2022) or transfer a model fine-tuned on source tasks to another target task (Vu et al., 2020; Talmor and Berant, 2019). Several recent works introduce zero-shot or few-shot transfer of massive multi-task pretrained models (Sanh et al., 2022; Min et al., 2021; Wang et al., 2022a,b) via in-context learning, which does not require any parameter updates. However, those massive multi-task training approaches lack the flexibility of adding or removing source tasks, even when some of the tasks cause negative interference between competing tasks (Zhang et al., 2020; Aghajanyan et al., 2021). + +Our problem setup. We combine parameter-efficient tuning and multi-task learning. Given a collection of source tasks $T_{1}, \ldots, T_{t}$ , our goal is to learn a new task $T_{target}$ by efficiently updating parameters $\phi_{target}$ given the target task data $\{(\boldsymbol{x}, \boldsymbol{y})\}$ , transferring knowledge from the source
+ +# 3 Method + +ATTEMPT (depicted in Figure 2) leverages highly parameter-efficient prompt tuning (Lester et al., 2021) to obtain source prompts that encode knowledge from source tasks into a small number of parameters. It tunes instance-level prompts by integrating the source prompts and a target prompt newly initialized for a target task through an attention mechanism for every target task instance. + +ATTEMPT pre-trains a set of source prompts $\mathbf{P}_1, \ldots, \mathbf{P}_t$ for source tasks (Section 3.1; left side of Figure 2) and initializes a target prompt $\mathbf{P}_{target}$ for the target task. It then computes attentions between embedded input $\mathbf{X}$ and the soft prompts for each instance $(\boldsymbol{x}, \boldsymbol{y})$ using an attention module $\mathcal{G}$ (Section 3.2.1). Subsequently, ATTEMPT produces instance-wise prompt $\mathbf{P}_{instance}$ by interpolating the source prompts and the target-task prompt given the computed attentions (Section 3.2.2). $\mathbf{P}_{instance}$ is then prepended to the input to form the final input to a frozen LM $\theta$ . + +During training, ATTEMPT only updates the weights of $\mathbf{P}_{\text {target }}$ and $\mathcal{G}$ by maximizing the probability of generating $\pmb{y}$ given $\mathbf{P}_{\text {instance }}$ and $\pmb{x}$ . Importantly, it uses the unique characteristic of prompt or prefix tuning, where task-specific parameters $\phi_{\text {target }}$ for different tasks can be trained in the same minibatch (Lester et al., 2021; Li and Liang, 2021). Hence, it can train a shared attention $\mathcal{G}$ and multiple target task prompts simultaneously for further parameter and inference efficiency (Section 3.3). Finally, we discuss parameter efficiency of ATTEMPT in Section 3.4. 
+ +# 3.1 Source Prompt Pre-training + +We first obtain source prompts $[\mathbf{P}_1, \ldots, \mathbf{P}_t]$ for $t$ high-resource datasets, such as Multi-NLI (Williams et al., 2018) and SQuAD (Rajpurkar et al., 2016), through prompt tuning (Lester et al., 2021). Each source prompt is only trained once for a source task and can be transferred to different target tasks. Formally, for an input sequence $\mathbf{X}$ , a soft prompt is represented as $\mathbf{P} = [\mathbf{p}_1, \ldots, \mathbf{p}_m] \in \mathbb{R}^{m \times d}$ , where $m$ is the prompt length, and $d$ is the LM dimension. Input embeddings prepended with the prompt, $[\mathbf{P}; \mathbf{X}]$ , are fed into the frozen LM $\theta$ . During training, only prompt embeddings are updated by maximizing the likelihood of generating the target sequence $\boldsymbol{y}$ , as follows: + +$$ +\max_{\mathbf{P}} p_{\theta}(\boldsymbol{y} \mid [\mathbf{P}; \mathbf{X}]). \tag{1} +$$ + +# 3.2 Target Prompt Training + +After initializing a soft prompt for a new target task $\mathbf{P}_{\text{target}} (= \mathbf{P}_{t+1})$ , we learn instance-wise soft prompts $\mathbf{P}_{\text{instance}}$ for each instance in the target task by interpolating the source prompts and the target task prompt given attention scores generated by $\mathcal{G}$ . Similar to Eq. 1, we concatenate the produced instance-wise prompt to the input and train ATTEMPT by maximizing the likelihood: + +$$ +\max_{\mathbf{P}_{\text{target}}, \mathcal{G}} p_{\theta}(\boldsymbol{y} \mid [\mathbf{P}_{\text{instance}}; \mathbf{X}]). \tag{2} +$$ + +During training, the new task prompt $\mathbf{P}_{\text{target}}$ and $\mathcal{G}$ are updated via $\mathbf{P}_{\text{instance}}$ , while source prompts and the original LM $\theta$ are untouched to preserve the knowledge learned from prior tasks or pretraining.
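A toy, shape-level sketch of the prompt prepending in Eqs. 1-2 (random values and invented sizes; the frozen LM itself is not modeled here):

```python
import numpy as np

rng = np.random.default_rng(0)
m, l, d = 4, 6, 8            # prompt length, input length, LM dimension (toy)
P = rng.normal(size=(m, d))  # trainable soft prompt: the only updated weights
X = rng.normal(size=(l, d))  # input token embeddings from the frozen LM

# [P; X]: the prompt rows are placed in front of the input rows, so the
# frozen LM consumes a sequence of length m + l.
prompted_input = np.concatenate([P, X], axis=0)
```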
+ +# 3.2.1 Input-prompt Attentions + +ATTEMPT controls the influence of the set of source prompts on the instance-wise prompt by calculating input-prompt attentions. Specifically, an attention module $\mathcal{G}$ generates the attention weights $a_1, \ldots, a_{t+1}$ from input $\mathbf{X}$ to the prompts, including both source prompts and the new target prompt. + +Since the input $\mathbf{X} \in \mathbb{R}^{l \times d}$ and a soft prompt $\mathbf{P}_j \in \mathbb{R}^{m \times d}$ have different sequence lengths, we first perform the max-pool operation for each dimension on $\mathbf{X}$ and each source prompt embedding and obtain $\hat{\mathbf{X}} \in \mathbb{R}^d$ and $\hat{\mathbf{P}}_j \in \mathbb{R}^d$ . We then feed $\hat{\mathbf{X}}$ to a sub-network $\mathcal{G}$ to project it into the prompt spaces. For efficiency, $\mathcal{G}$ consists of down and up projection layers, as follows: + +$$ +\begin{aligned} \mathbf{H}_{down} &= \mathbf{W}_{down}^{\top} \hat{\mathbf{X}} \\ \mathbf{H}_{up} &= \mathbf{W}_{up}^{\top}\, \mathrm{NonLinear}(\mathbf{H}_{down}) \\ \mathbf{H}_{out} &= \mathrm{LayerNorm}(\mathbf{H}_{up}), \end{aligned} +$$ + +where $\mathbf{W}_{down} \in \mathbb{R}^{d \times r}(r < d)$ and $\mathbf{W}_{up} \in \mathbb{R}^{r \times d}$ are projection parameters to be updated during training. We use SiLU (Elfwing et al., 2017) for the non-linear layer and apply Layer Norm (Ba et al., 2016) on $\mathbf{H}_{up}$ , observing that without layer norm, $\mathbf{H}_{up}$ often grows quickly and gradients explode.
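The sub-network $\mathcal{G}$ above can be sketched with toy dimensions (random weights, hand-rolled SiLU and LayerNorm; LayerNorm's learnable gain and bias are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
l, d, r = 6, 8, 4                 # input length, LM dim, bottleneck dim (toy)
X = rng.normal(size=(l, d))       # input embeddings
W_down = rng.normal(size=(d, r))  # trainable down-projection
W_up = rng.normal(size=(r, d))    # trainable up-projection

def silu(z):                      # SiLU non-linearity: z * sigmoid(z)
    return z / (1.0 + np.exp(-z))

def layer_norm(z, eps=1e-6):      # LayerNorm without gain/bias terms
    return (z - z.mean()) / np.sqrt(z.var() + eps)

x_hat = X.max(axis=0)             # per-dimension max-pool over the sequence
h_down = W_down.T @ x_hat         # H_down = W_down^T x_hat
h_up = W_up.T @ silu(h_down)      # H_up = W_up^T SiLU(H_down)
h_out = layer_norm(h_up)          # H_out = LayerNorm(H_up)
```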
+ +Finally, we compute the attentions by calculating the product between $\hat{\mathbf{P}}_j$ and $\mathbf{H}_{out}$ , and apply softmax over the prompts, as follows: + +$$ +a_{j} = \frac{e^{\hat{\mathbf{P}}_{j} \mathbf{H}_{\text{out}} / T}}{\sum_{k=1}^{t+1} e^{\hat{\mathbf{P}}_{k} \mathbf{H}_{\text{out}} / T}}, \tag{3} +$$ + +where $T$ is a softmax temperature (Radford et al., 2021) that scales the logits in Eq. 3 to avoid making the attention module over-confident. + +# 3.2.2 Prompt Interpolation + +The final soft prompt for the instance $\mathbf{X}$ is calculated as the weighted sum of the prompts given the attention generated by Eq. 3: + +$$ +\mathbf{P}_{\text{instance}}(\mathbf{X}) = \mathbf{P}_{\text{target}} + \sum_{j=1}^{t+1} a_{j} \mathbf{P}_{j}. \tag{4} +$$ + +The second term on the right differs for different instances of the same task, while the $\mathbf{P}_{\text{target}}$ term is task-specific. The attentions act as a gate to control the influences from different prompts and enable a flexible composition of knowledge from multiple tasks. As shown in Eq. 4, the total weight of $1 + a_{t+1}$ assigned to the target-task-specific prompt $\mathbf{P}_{\text{target}} (= \mathbf{P}_{t+1})$ enables ATTEMPT to downplay the role of source prompts if the knowledge from none of the source tasks is useful for the instance $\mathbf{X}$ , while always keeping the influence of $\mathbf{P}_{\text{target}}$ so that it will be properly updated during training. + +# 3.3 Multi-task Training and Inference + +Training. ATTEMPT can jointly train the attention module $\mathcal{G}$ and multiple target task prompts. Here, we explain our approach to multi-task learning over a group of target tasks by sharing $\mathcal{G}$ . + +It first concatenates the training datasets, while keeping the task ID information of each instance.
During training, we retrieve the target-task prompt corresponding to the instance given the task ID, calculate attentions over the set of the prompts and produce the instance-wise prompt as described in Section 3.2. The loss for each target task prompt only backpropagates when the prompt is used, while the weights of the attention module are updated at each iteration. + +This way, target tasks are loosely connected and together contribute to an improved and task-agnostic attention module, which is particularly effective when the target task training data is small. Moreover, this reduces the number of parameters to be updated per task and improves inference-time efficiency. + +Inference. At inference time, we load source prompts, all of the target task prompts and the shared $\mathcal{G}$ just once. For each instance, ATTEMPT retrieves the target task prompt and produces $\mathbf{P}_{\text{instance}}$ as in Eq. 4, and then concatenates $\mathbf{P}_{\text{instance}}$ to the input embedding. The inference process after producing the instance prompt is exactly the same as in prompt tuning. + +ATTEMPT enables loading multiple target task prompts and performing multiple target tasks simultaneously, significantly reducing the model-loading overhead at inference time. Existing approaches such as full fine-tuning or Adapter require model reloading for different target tasks, making their multi-task inference pipelines complicated.
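The bookkeeping above reduces to a per-task prompt lookup plus the shared attention module; a self-contained toy sketch (task names, sizes, and the stand-in for $\mathcal{G}$'s projection are all hypothetical, not the released code):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, T = 4, 8, 1.0                          # prompt length, LM dim, temperature
source_prompts = rng.normal(size=(3, m, d))  # loaded once, kept frozen
target_prompts = {"boolq": rng.normal(size=(m, d)),   # one prompt per target task
                  "rte": rng.normal(size=(m, d))}

def instance_prompt(task_id, X):
    """Retrieve the task's prompt by ID, then mix prompts as in Eqs. 3-4."""
    P_target = target_prompts[task_id]
    prompts = np.concatenate([source_prompts, P_target[None]], axis=0)
    h_out = X.max(axis=0)                    # stand-in for G(X); real G projects this
    logits = prompts.max(axis=1) @ h_out / T # max-pooled prompt / H_out products
    a = np.exp(logits - logits.max()); a /= a.sum()       # temperature softmax
    return P_target + np.einsum("k,kmd->md", a, prompts)  # Eq. 4 interpolation

X = rng.normal(size=(6, d))
P_inst = instance_prompt("rte", X)           # same loaded state serves every task
```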
A unique characteristic of ATTEMPT and prompt tuning is their independence from the number of LM layers: with Adapter or fine-tuning, the number of parameters quickly increases as the backbone LM gets larger. ATTEMPT, in contrast, updates only the soft prompts and does not modify the LM layers, resulting in a moderate parameter increase compared to other approaches. When we use T5-XL as a backbone LM, Adapter and BitFit update about 6 million and 2 million parameters respectively, while ATTEMPT only updates and stores $172\mathrm{k}$ parameters per task (Figure 7 in Appendix).

# 4 Experiments

# 4.1 Source and Target Tasks

We use 6 large-scale datasets as source tasks, and evaluate on 21 diverse target tasks covering entailment, paraphrase detection, sentiment analysis, question answering (QA), and commonsense reasoning. Dataset details are in Appendix Section B.

Source tasks. We use the following datasets, with more than 100k annotations in total, from GLUE, SuperGLUE and MRQA for source prompts: MNLI (Williams et al., 2018), QNLI (Demszky et al., 2018), QQP (Wang et al., 2019b), SST-2 (Socher et al., 2013), SQuAD (Rajpurkar et al., 2016), and ReCoRD (Zhang et al., 2018).

GLUE and SuperGLUE. We use 8 GLUE tasks (Wang et al., 2019b) and 5 SuperGLUE tasks (Wang et al., 2019a) as target datasets to test the model's natural language understanding abilities: BoolQ (Clark et al., 2019), CB (De Marneffe et al., 2019), MultiRC (Khashabi et al., 2018), WiC (Pilehvar and Camacho-Collados, 2019), WSC (Levesque et al., 2012), RTE (Giampiccolo et al., 2007), CoLA (Warstadt et al., 2019), STS-B (Cer et al., 2017), MRPC (Dolan and Brockett, 2005), MNLI, QQP, QNLI and SST-2. Four of the GLUE datasets used as source tasks (MNLI, QQP, SST-2 and QNLI) are also included as target tasks to provide comprehensive comparisons with prior parameter-efficient tuning methods, whose evaluations often focus on GLUE (Lester et al., 2021; Ben Zaken et al., 2022).
Question answering. We use the MRQA 2019 shared task (Fisch et al., 2019) data to test on four large-scale QA datasets: Natural Questions (NQ; Kwiatkowski et al. 2019), HotpotQA (HQ; Yang et al. 2018), NewsQA (News; Trischler et al. 2017) and SearchQA (SQA; Dunn et al. 2017).

Others. We experiment on four additional datasets, whose tasks are related to the source tasks but whose domains differ. SciTail (Khot et al., 2018) is a scientific entailment dataset. Yelp-2 (Zhang et al., 2015) is a sentiment analysis dataset on Yelp reviews. WinoGrande (Sakaguchi et al., 2020) is a commonsense reasoning task in multiple-choice format. PAWS-Wiki (Zhang et al., 2019) is a Wikipedia-based paraphrase detection dataset.

# 4.2 Baselines and Implementation Details

Baselines. We compare ATTEMPT with: fine-tuning (FT); prompt tuning (PT; Lester et al. 2021), where target prompt embeddings are initialized from randomly sampled top vocabulary tokens; SPoT (Vu et al., 2022), where target prompts are initialized with source prompt embeddings trained on other tasks (details are in the Appendix); Adapter (Houlsby et al., 2019); AdapterDrop (Rücklé et al., 2021); and BitFit (Ben Zaken et al., 2022). On GLUE, we also compare ATTEMPT with several state-of-the-art multi-task methods, which train a single model on different tasks: FT-multi-task (FT-m), Adapter-m, HyperFormer (Mahabadi et al., 2021b), HyperDecoder (Ivison and Peters, 2022), and AdapterFusion (Pfeiffer et al., 2021).

Implementation details. Although our methods, ATTEMPT and ATTEMPT-m, use the same six source task prompts, ATTEMPT-m trains a shared attention layer across multiple target tasks through multi-task training, while ATTEMPT trains a task-specific attention layer separately. Unless specified otherwise, we use T5-base as the base LM for ATTEMPT and all of the baselines.
If a dataset does not have a public test split with annotations, we use the development set as our test set or split the development set into our own development and test sets, following Mahabadi et al. (2021a). We train for 20 epochs on small datasets with fewer than 10k examples, 10 epochs on medium-sized datasets with more than 10k examples, and 5 epochs on the MRQA datasets, and we limit Yelp-2 to at most 100k training samples. To make $\mathcal{G}$ learn a good prompt composition for efficient knowledge transfer, we introduce different learning rates for $\mathcal{G}$ (Ponti et al., 2022) and also pre-train and trans-
| data (# of train) | param/task | MNLI (393k) | QQP (364k) | QNLI (105k) | SST-2 (67k) | STS-B (7k) | MRPC (3.7k) | RTE (2.5k) | CoLA (8.5k) | GLUE avg. | Multi (5.1k) | Bool (9.4k) | WiC (6k) | WSC (554) | CB (250) | SuperGLUE avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Fine-tuning | 220M | 86.8 | 91.6 | 93.0 | 94.6 | 89.7 | 90.2 | 71.9 | 61.8 | 84.9 | 72.8 | 81.1 | 70.2 | 59.6 | 85.7 | 73.9 |
| Adapter | 1.9M | 86.5 | 90.2 | 93.2 | 93.8 | 90.7 | 85.3 | 71.9 | 64.0 | 84.5 | 75.9 | 82.5 | 67.1 | 67.3 | 85.7 | 75.7 |
| AdapterDrop | 1.1M | 86.3 | 90.2 | 93.2 | 93.6 | 91.4 | 86.3 | 71.2 | 62.7 | 84.4 | 72.9 | 82.3 | 68.3 | 67.3 | 85.7 | 75.3 |
| BitFit | 280k | 85.3 | 90.1 | 93.0 | 94.2 | 90.9 | 86.8 | 67.6 | 58.2 | 83.3 | 74.5 | 79.6 | 70.0 | 59.6 | 78.6 | 72.5 |
| PT | 77k | 81.3 | 89.7 | 92.8 | 90.9 | 89.5 | 68.1 | 54.7 | 10.6 | 72.2 | 58.7 | 61.7 | 48.9 | 51.9 | 67.9 | 57.8 |
| SPoT | 77k | 85.4 | 90.1 | 93.0 | 93.4 | 90.0 | 79.7 | 69.8 | 57.1 | 82.3 | 74.0 | 77.2 | 67.0 | 50.0 | 46.4 | 62.9 |
| Fine-tuning-m† | 28M | 85.7 | 91.1 | 92.0 | 92.5 | 88.8 | 90.2 | 75.4 | 54.9 | 83.8 | - | - | - | - | - | - |
| Adapter-m† | 1.8M | 86.3 | 90.5 | 93.2 | 93.0 | 89.9 | 90.2 | 70.3 | 61.5 | 84.4 | - | - | - | - | - | - |
| HyperFormer† | 638k | 85.7 | 90.0 | 93.0 | 94.0 | 89.7 | 87.2 | 75.4 | 63.7 | 84.8 | - | - | - | - | - | - |
| HyperDecoder‡ | 1.8M | 86.0 | 90.5 | 93.4 | 94.0 | 90.5 | 87.7 | 71.7 | 55.9 | 83.7 | - | - | - | - | - | - |
| AdapterFusion* | - | 84.2 | 90.7 | - | 92.2 | - | 90.3 | 76.8 | - | - | - | 76.3 | - | - | 92.1 | - |
| ATTEMPT | 232k | 84.3 | 90.3 | 93.0 | 93.2 | 89.7 | 85.7 | 73.4 | 57.4 | 83.4 | 74.4 | 78.8 | 66.8 | 53.8 | 78.6 | 70.5 |
| ATTEMPT-m | 96k | 83.8 | 90.0 | 93.1 | 93.7 | 90.8 | 86.1 | 79.9 | 64.3 | 85.2 | 74.4 | 78.3 | 66.5 | 69.2 | 82.1 | 74.1 |
Table 1: Results on GLUE. All of the results are based on T5-base models. For the GLUE experiments, we exclude SQuAD and ReCoRD from the source prompt inventory for comparison with prior work. We use Pearson correlation for STS-B, F1 for MultiRC (Multi), and accuracy for the other tasks as metrics. “param/task” denotes the number of parameters trained per task in GLUE. † from Mahabadi et al. (2021b); ‡ from Ivison and Peters (2022); * from Pfeiffer et al. (2021), whose base LM is RoBERTa-base (Liu et al., 2019b).
| data (# of train) | params/task | NQ (100k) | HP (72k) | SQA (117k) | News (74k) | Avg. (MRQA) | WG (40k) | Yelp (100k) | SciTail (27k) | PAWS (49k) | Avg. (others) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Fine-tuning | 220M | 75.1 | 77.5 | 81.1 | 65.2 | 74.7 | 61.9 | 96.7 | 95.8 | 94.1 | 87.1 |
| Adapter | 1.9M | 74.2 | 77.6 | 81.4 | 65.6 | 74.7 | 59.2 | 96.9 | 94.5 | 94.3 | 86.2 |
| BitFit | 280k | 70.7 | 75.5 | 77.7 | 64.1 | 72.0 | 57.2 | 94.7 | 94.7 | 92.0 | 84.7 |
| Prompt tuning | 77k | 67.9 | 72.9 | 75.7 | 61.1 | 69.4 | 49.6 | 95.1 | 87.9 | 55.8 | 72.1 |
| SPoT-t | 77k | 68.2 | 74.8 | 75.3 | 58.2 | 69.1 | 50.4 | 95.4 | 91.2 | 91.1 | 82.0 |
| ATTEMPT | 232k | 70.4 | 75.2 | 77.3 | 62.8 | 71.4 | 57.6 | 96.7 | 93.1 | 92.1 | 84.9 |
| ATTEMPT-m | 134k | 72.5 | 76.7 | 78.0 | 63.9 | 72.8 | 58.6 | 96.2 | 94.6 | 92.8 | 85.6 |
Table 2: Results on MRQA 2019 QA datasets, WinoGrande (WG), Yelp, SciTail and PAWS. We use F1 for MRQA and accuracy for the others. "param/task" denotes the number of parameters trained per task in MRQA and the others.

fer the weights of $\mathcal{G}$ from the source tasks. More experimental details are in the Appendix.

Prompt initialization. Each source prompt is initialized by randomly sampling tokens from the top vocabularies, as in Lester et al. (2021). For target task prompt initialization, we use the MNLI source prompt for non-QA tasks and the SQuAD source prompt for QA tasks, instead of initializing them with randomly sampled vocabulary tokens, for training stability.

# 5 Results

We present the main results in Section 5.1 and few-shot domain transfer experiments on sampled tasks in Section 5.2, demonstrating the effectiveness of ATTEMPT especially when data is scarce. Section 5.3 provides a further set of analyses.

![](images/6a76c8505e84a5062e47a665f120c46714b30a4a27675a2f3f292b7eb0bc3bf6.jpg)
(a) GLUE
![](images/d4ee3b57df0f8f28d01a6085f1c3040fba3c85a365de933635a10a7917d813bb.jpg)
(b) SuperGLUE
Figure 3: Parameter efficiency and average scores. We use T5-base for all of the models.

# 5.1 Main Results

Tables 1 and 2 present the per-task performance on the GLUE and SuperGLUE datasets and on the other datasets, respectively.

Performance vs. efficiency. Figures 3a and 3b compare the performance of different models versus their number of updated parameters on GLUE and SuperGLUE. ATTEMPT-m significantly outperforms PT, SPoT and BitFit by a large margin, and matches Adapter or fine-tuning despite updating far fewer parameters per task and keeping the LM completely frozen. Table 1 shows that ATTEMPT outperforms all of the multi-task baselines, including the recent HyperFormer and HyperDecoder. In addition to competitive performance on GLUE/SuperGLUE, Table 2 shows that ATTEMPT-m achieves 72.8 average F1 on MRQA, outperforming BitFit, which uses twice as many parameters.
Moreover, ATTEMPT-m yields $85.6\%$ average accuracy on WinoGrande, Yelp, SciTail and PAWS, outperforming BitFit $(84.7\%)$ and matching Adapter $(86.2\%)$, which updates more than ten times as many parameters.

ATTEMPT largely improves prompt tuning. As pointed out by prior work (Mahabadi et al., 2021a; Lester et al., 2021; Sung et al., 2022), prompt tuning is sensitive to hyperparameters and initialization, and it shows significantly lower performance on several datasets such as CoLA $(10.6\%)$, BoolQ $(61.7\%)$ or WiC $(48.9\%)$. SPoT (Vu et al., 2022) improves the target task prompt initialization with a prompt trained on other related tasks, but it still under-performs other approaches and requires searching for suitable source tasks beforehand. ATTEMPT largely outperforms those approaches on smaller datasets (e.g., CB, RTE), as well as on the large-scale MRQA datasets, as shown in Table 2.

# 5.2 Few-shot Domain Adaptations

As shown in Table 1, ATTEMPT is particularly competitive on smaller datasets (e.g., RTE, WSC). Following Mahabadi et al. (2021b), we conduct few-shot experiments on BoolQ, CB and SciTail to further verify the effectiveness of ATTEMPT under a resource-constrained setup. Here, all of the models (fine-tuning, Adapter, HyperFormer, SPoT and ATTEMPT) are first trained on the GLUE tasks and then transferred to new tasks using only $k$ ($k = 4, 16, 32$) randomly sampled training examples. More details of the few-shot domain adaptation experiments are available in the Appendix.

Table 3 shows that ATTEMPT significantly outperforms the other methods in most settings. This indicates the effectiveness of transferring knowledge from multiple source tasks in a non-destructive manner for few-shot domain adaptation.
| | k-shot | FT | AD | SPoT | HF | ATP |
|---|---|---|---|---|---|---|
| BoolQ | 4 | 50.5 | 53.5 | 50.5 | 48.0 | 61.8 |
| | 16 | 56.5 | 51.4 | 50.6 | 50.2 | 60.0 |
| | 32 | 58.4 | 54.5 | 61.2 | 58.3 | 65.3 |
| CB | 4 | 57.8 | 51.1 | 71.4 | 51.1 | 82.1 |
| | 16 | 77.0 | 74.8 | 64.3 | 74.8 | 78.5 |
| | 32 | 81.8 | 85.1 | 64.3 | 81.5 | 85.7 |
| SciTail | 4 | 79.6 | 79.5 | 69.6 | 82.0 | 80.2 |
| | 16 | 80.0 | 83.3 | 71.9 | 86.6 | 79.5 |
| | 32 | 82.0 | 85.1 | 71.9 | 85.9 | 80.2 |
Table 3: Few-shot results ($k = \{4, 16, 32\}$). FT, AD, HF and ATP denote Fine-tuning, Adapter, HyperFormer (Mahabadi et al., 2021b) and ATTEMPT.

![](images/890ad6e3e045a6e1d834dd32be07f64c7e8e6f6dbd66a7cc883d46154e8d7124.jpg)
(a) BoolQ
![](images/a499ebc45844e45eca250cfccc4381c509a55e519eb5173c182c8ba19791b5e7.jpg)
(b) MultiRC
![](images/5a4fd80b3657e0a87b9bf58852cf9ebaaa50caca01b8ca771f50e401ab358a50.jpg)
(c) WiC
Figure 4: Performance with different backbone LMs.

# 5.3 Analyses

Power of scale. We empirically analyze how increasing the backbone LM size affects ATTEMPT's performance. Figure 4 summarizes the performance of Adapter, ATTEMPT, prompt tuning (PT), and fully fine-tuned (FT) models versus LM size on three SuperGLUE datasets. ATTEMPT benefits substantially from increases in backbone LM size. This aligns with the finding of Lester et al. (2021) that prompt tuning is particularly effective with larger backbone LMs. Moreover, ATTEMPT matches fully fine-tuned models even with T5-base or T5-large, in contrast to prompt tuning, which suffers when the backbone LM is smaller. Furthermore, ATTEMPT performs on par with or outperforms Adapter with T5-3B, while updating 37 times fewer parameters.

Ablation studies. We compare different variants of ATTEMPT to see the effect of each design choice. We ablate ATTEMPT with (a) no target, which neither initializes nor adds target task prompts in Eq. 4, to assess the feasibility of adapting to a new task by only interpolating pre-
| | BoolQ | NewsQA | WG |
|---|---|---|---|
| ATTEMPT-m | 78.29 | 61.58 | 58.57 |
| ATTEMPT | 77.06 | 61.84 | 57.61 |
| no target | 50.89 | 55.26 | 47.89 |
| no attention | 73.57 | 52.55 | 56.03 |
| single prompt | 76.25 | 60.92 | 55.56 |
Table 4: Results of ablation studies. "WG" denotes WinoGrande. For the NewsQA ablation, we used 10k randomly sampled training examples for quick ablation.

![](images/56b08dd02c067567bf71cee8735e79ae865c30208ad6d31813a9f53cdda21d55.jpg)
(a) RTE
![](images/1a42d256d75ceea49b437664eb4765646ababc7242be2017c1d05f3afe7c0574.jpg)
(b) BoolQ
Figure 5: Performance on the RTE and BoolQ dev sets when source prompts are added one by one, starting from the MNLI source prompt only.

trained source prompts; (b) no attention, which gives a constant score $a_{j} = 1/t$ to all source prompts in Eq. 3, discarding the attentions; (c) single prompt, which uses only a single source prompt to assess the effect of transferring knowledge from multiple tasks. The single prompt ablation is similar to SPoT, except that instead of using the source prompt for initialization and updating it during training, we keep the source prompt frozen while updating the target task prompt and the attention layers.

Table 4 indicates that all components contribute to the performance improvements. Adding a trainable target-task-specific prompt (no target) is crucial for good performance on all of the datasets, especially BoolQ and WinoGrande. Constant attention causes a large performance drop, especially on BoolQ and NewsQA, indicating that learned attentions are important rather than simply averaging the multiple source prompts. Although the single prompt ablation outperforms SPoT, possibly due to ATTEMPT's non-destructive soft prompt transfer, there is a notable performance decline relative to ATTEMPT. This demonstrates the effectiveness of leveraging multiple soft prompts to transfer knowledge from multiple diverse tasks.

Modularity: effects of variable source prompts. We study the modular nature of ATTEMPT, which enables flexibly adding or removing source tasks.
Figure 5 shows how including source tasks affects the final performance of ATTEMPT on two benchmarks, BoolQ and RTE. On both datasets, adding more source task prompts improves performance, with the exception of adding SQuAD and ReCoRD on RTE ("full" in Figure 5a). This potentially results from negative transfer due to the different natures of QA and RTE, whereas adding the two QA source prompts helps on BoolQ.

![](images/f2d719a2ff7ab16e50aad001668a0f6fb252fc8dfaf0edf3bd8fb06f2998bea6.jpg)
Figure 6: Attention visualizations of ATTEMPT.

Interpretability: analysis of attentions. Figure 6 shows the attention weight matrix between source and target tasks produced by ATTEMPT. Note that for the target task prompt, we present the $a_{t+1}$ weight before adding 1. Attention patterns differ across tasks. Generally, $\mathcal{G}$ gives higher attention to related source tasks: Yelp → SST-2, or PAWS-Wiki → QQP, which are the same tasks in different domains. QQP is often highly attended by tasks that are seemingly different from paraphrasing (e.g., MultiRC, WNLI), which may indicate underlying similarities between those tasks. In contrast, MNLI is not highly attended by some highly related target tasks such as RTE. We hypothesize that this is because the target task prompts for those tasks are initialized with the MNLI source prompt, and thus ATTEMPT may attend to other tasks instead. On WinoGrande and SciTail, $\mathcal{G}$ gives large attention to the target task embeddings ("target"); this may be because those two tasks have a significantly different task format or input domain, so $\mathcal{G}$ relies less on the source prompts.

# 6 Related Work

Parameter-efficient tuning. Here, we list additional parameter-efficient tuning methods that are closely related to our work. AdapterFusion (Pfeiffer et al., 2021) composes multiple adapters by learning task-specific compositions on each task, and Friedman et al. (2021) take an average of multiple adapter layers after training adapters individually
AdapterFusion (Pfeiffer et al., 2021) compose multiple different adapters by learning task-specific compositions on each task, and Friedman et al. (2021) take an average of multiple adapter layers after training adapters individually + +on different QA datasets. HyperFormer (Mahabadi et al., 2021b) and HyperDecoder (Ivison and Peters, 2022) train a shared hyper network to generate parameters of adapter layers. Qin and Eisner (2021) introduce a mixture of soft prompts, where predictions given different prompts are ensembled for the same knowledge base relationship types. IDPG (Wu et al., 2022) and Instance-Dependent Prompt Tuning (Levine et al., 2022) learn to generate instance-wise prompts given the input encoded by LMs. Compared to the previous work, our main focus is transferring knowledge from multiple tasks to produce soft prompts rather than learning to generate them from scratch, and is much more efficient in terms of parameters and inference time. + +Concurrent to our work, Liu et al. (2022) introduce $(\mathrm{IA})^3$ that multiplies intermediate activation by learned vectors for few-shot learning. Wang et al. (2022b) shows that combining a set of prompts retrieved from the prompt pool by a key-value mechanism yields competitive performance in computer vision continual learning. For generation tasks, Li et al. (2022) transfer multiple source prompts using multi-key memory network for prompt clustering and multi-head attention taking another LM output. In contrast, we present an efficient multi-task tuning that is effective in diverse NLP tasks. More importantly, prior work often relies on priors such as pre-computed clusters or another LM's predictions of which tasks should be used as source tasks. ATTEMPT removes the necessity of such priors by training an attention layer that learn to focus on relevant source tasks. 
Several recent lines of research adapt a massive multi-task LM trained with instructions or demonstrations to a new task without any parameter updates (Sanh et al., 2022; Min et al., 2021; Wang et al., 2022a; Wei et al., 2022). The main focus of this paper is how to efficiently transfer rich multi-task knowledge from source tasks to target tasks during target task training, while those works often emphasize zero- or few-shot transfer without any parameter updates.

Modular multi-task training. There is a large literature on composing multiple separate networks to handle different sub-tasks (Jacobs et al., 1991b,a; Andreas et al., 2016; McCann et al., 2018). As LM sizes expand, several recent works try to sparsely activate or employ lightweight modules for efficient multi-task learning (Gupta et al., 2022; Ponti et al., 2022; Fedus et al., 2022). In particular, we share the same intuition as the concurrent work of Ponti et al. (2022), which combines several skills encapsulated in parameter-efficient modules; however, our main focus is on how to transfer and share knowledge from resource-rich tasks in a highly parameter-efficient way, while they focus on improving few-shot generalization. Moreover, ATTEMPT keeps the LM intact and updates fewer parameters.

# 7 Conclusion

We present a new parameter-efficient tuning method, ATTEMPT, which learns to produce instance-wise prompts by interpolating multiple reusable soft prompts trained on source tasks with a new task-specific prompt, while keeping the original LM frozen. Our large-scale experiments demonstrate that ATTEMPT achieves a strong trade-off between task performance and efficiency while offering interpretable and modular task transfer.
# Limitations

Despite its parameter efficiency and strong empirical results, ATTEMPT has several limitations. First, as prompt tuning increases the input length by $m$ prompt tokens, it increases the memory footprint and computational cost (Mahabadi et al., 2021a), although Lester et al. (2021) found that the prompt length can be shortened when larger LMs are used as backbone models. We investigate this issue in Appendix Section C.10. Second, as a first step toward multi-task knowledge transfer via soft prompts, our evaluation focuses on classification and QA tasks, and our target tasks do not include tasks that require long-sequence generation (e.g., summarization). Future work can explore applications of ATTEMPT to more diverse sets of tasks. In addition, we use six representative NLP tasks as source tasks, but do not explore large-scale experiments on many source task combinations. We will release pretrained source prompts and easily extendable code to facilitate future work on multi-task transfer via soft prompts. Lastly, we do not test ATTEMPT on non-English tasks; we will investigate the effectiveness of ATTEMPT in non-English languages and apply ATTEMPT to cross-lingual transfer.

# Ethics Statement

ATTEMPT aims to improve the parameter efficiency and transferability of models so that groups with limited computational resources can still benefit from state-of-the-art large-scale models. All of the experiments are based on widely used, general-purpose datasets, which are unlikely to include harmful content. However, several datasets such as Yelp Review are created from existing review sites and may carry greater risks of privacy issues or harmful content than datasets based on news or encyclopedic websites.

# Acknowledgement

This research was supported by NSF IIS-2044660, ONR N00014-18-1-2826, a Sloan fellowship and gifts from AI2, and the Nakajima Foundation Fellowship.
We thank UW NLP and Allen NLP group members for their insightful discussion and Sandy Kaplan, Sewon Min, Ofir Press, and Yizhong Wang for their helpful feedback on this paper. + +# References + +Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. 2021. Muppet: Massive multi-task representations with pre-finetuning. In EMNLP. +Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In CVPR. +Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, and Donald Metzler. 2022. Ext5: Towards extreme multi-task scaling for transfer learning. In ICLR. +Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. +Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. 2022. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In ACL. +Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In NeurIPS. +Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). + +Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. +Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In *NAACL*. 
+Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The commitmentbank: Investigating projection in naturally occurring discourse. In Sinn und Bedeutung 23. +Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. arXiv preprint arXiv:1809.02922. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*. +Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. arXiv preprint arXiv:2002.06305. +William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP). +Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. SearchQA: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179. +Stefan Elfwing, Eiji Uchibe, and Kenji Doya. 2017. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. arXiv preprint arXiv:1702.03118. +William Fedus, Barret Zoph, and Noam Shazeer. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. JMLR. +Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering. +Dan Friedman, Ben Dodge, and Danqi Chen. 2021. Single-dataset experts for multi-dataset question answering. In EMNLP. +Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. 
In Proceedings of the + +ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. +Shashank Gupta, Subhabrata Mukherjee, Krishan Subudhi, Eduardo Gonzalez, Damien Jose, Ahmed H Awadallah, and Jianfeng Gao. 2022. Sparsely activated mixture-of-experts are robust multi-task learners. arXiv preprint arXiv:2204.07689. +Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. 2022. Towards a unified view of parameter-efficient transfer learning. In ICLR. +Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415. +Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In ICML. +Hamish Ivison and Matthew E Peters. 2022. Hyperdecoders: Instance-specific decoders for multi-task NLP. arXiv preprint arXiv:2203.08304. +Robert A Jacobs, Michael I Jordan, and Andrew G Barto. 1991a. Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks. Cognitive science. +Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. 1991b. Adaptive mixtures of local experts. Neural computation. +Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In *NAACL*. +Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In EMNLP. +Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Scitail: A textual entailment dataset from science question answering. In AAAI. +Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. 
+Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. TACL. +Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In EMNLP. + +Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning. +Yoav Levine, Itay Dalmedigos, Ori Ram, Yoel Zeldes, Daniel Jannai, Dor Muhlgay, Yoni Osin, Opher Lieber, Barak Lenz, Shai Shalev-Shwartz, et al. 2022. Standing on the shoulders of giant frozen language models. arXiv preprint arXiv:2204.10019. +Junyi Li, Tianyi Tang, Jian-Yun Nie, Ji-Rong Wen, and Wayne Xin Zhao. 2022. Learning to transfer prompts for text generation. In *NAACL*. +Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In ACL. +Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel. 2022. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. arXiv preprint arXiv:2205.05638. +Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In ACL. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. 2021a. Compacter: Efficient low-rank hypercomplex adapter layers. In NeurIPS. 
+Rabeeh Karimi Mahabadi, Sebastian Ruder, Mostafa Dehghani, and James Henderson. 2021b. Parameter-efficient multi-task fine-tuning for transformers via shared hypernetworks. In ACL. +Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. +Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2021. MetaICL: Learning to learn in context. In NAACL. +Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2021. On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines. In ICLR. +Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: + +An imperative style, high-performance deep learning library. In NeurIPS. +Jonas Pfeiffer, Aishwarya Kamath, Andreas Rückle, Kyunghyun Cho, and Iryna Gurevych. 2021. AdapterFusion: Non-destructive task composition for transfer learning. In EACL. +Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. In NAACL. +Edoardo M Ponti, Alessandro Sordoni, and Siva Reddy. 2022. Combining modular skills in multitask learning. arXiv preprint arXiv:2202.13914. +Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts. In NAACL. +Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In ICML. 
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP. +Andreas Rückle, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2021. AdapterDrop: On the efficiency of adapters in transformers. In EMNLP. +Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. WinoGrande: An adversarial winograd schema challenge at scale. In AAAI. +Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In ICLR. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP. + +Yi-Lin Sung, Jaemin Cho, and Mohit Bansal. 2022. Lst: Ladder side-tuning for parameter and memory efficient transfer learning. arXiv preprint arXiv:2206.06522. +Alon Talmor and Jonathan Berant. 2019. MultiQA: An empirical investigation of generalization and transfer in reading comprehension. In ACL. 
+Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP.
+Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou', and Daniel Cer. 2022. SPoT: Better frozen model adaptation through soft prompt transfer. In ACL.
+Tu Vu, Tong Wang, Tsendsuren Munkhdalai, Alessandro Sordoni, Adam Trischler, Andrew Mattarella-Micke, Subhransu Maji, and Mohit Iyyer. 2020. Exploring and predicting transferability across NLP tasks. In EMNLP.
+Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In NeurIPS.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR.
+Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. 2022a. Benchmarking generalization via in-context instructions on 1,600+ language tasks. arXiv preprint arXiv:2204.07705.
+Yu-An Wang and Yun-Nung Chen. 2020. What do position embeddings learn? An empirical study of pre-trained language model positional encoding. In EMNLP.
+Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. 2022b. Learning to prompt for continual learning. In CVPR.
+Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. TACL.
+Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In ICLR.
+Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In EMNLP: System Demonstrations.
+Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, VG Vydiswaran, and Hao Ma. 2022. IDPG: An instance-dependent prompt generation method. arXiv preprint arXiv:2204.04497.
+Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In EMNLP.
+Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint arXiv:1810.12885.
+Wen Zhang, Lingfei Deng, Lei Zhang, and Dongrui Wu. 2020. A survey on negative transfer. arXiv preprint arXiv:2009.00909.
+Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NeurIPS, volume 28.
+Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase adversaries from word scrambling. In NAACL.

# Appendix

# A More Method Details

# A.1 Improving Multi-task Training

Learning effective interpolations of prompts is challenging, as input embeddings themselves do not necessarily correspond to meaningful prompt tokens, and we do not have any supervision for the ground-truth task mapping.
We explore several approaches that add useful inductive bias to training so that $\mathcal{G}$ learns a good prompt composition for efficient knowledge transfer.

Learning attention prior. We pre-train the attention module on the source tasks and then use the learned projection layers and the layer norm to initialize the attention module on the target task(s). This learned prior can also be directly used for tasks that lack training data.

Two-speed learning rate. Ponti et al. (2022) show that setting different learning rates for the composition module and the task-specific model parameters provides useful inductive bias that encourages the model to learn the best skill composition. We also introduce this two-speed learning rate approach for ATTEMPT.

# A.2 Pre-training $\mathcal{G}$ on Source Tasks

To learn the attention prior for $\mathcal{G}$ , we run the same training process as in the target task training on the source tasks. In particular, we initialize another task-specific prompt for each source task, and train both those task-specific prompts and the shared attention weights of $\mathcal{G}$ on the combination of the source tasks, as in Section 3.2.

# A.3 Overview of Training

Table 5 presents an overview of the training algorithm.

# A.4 Parameter Efficiency of ATTEMPT

Figure 7 shows the number of parameters to be updated for Prompt tuning, ATTEMPT, Adapter, and BitFit as we increase the size of the backbone LMs. As we can see, the other parameter-efficient transfer approaches show rapid increases in trainable parameters, while ATTEMPT shows only small increases. In addition, ATTEMPT keeps the original LM frozen and, unlike those approaches, does not modify the LM structure.

# A.5 Alternative Attention Design

ATTEMPT computes the same attention scores for all $m$ prompt tokens. Alternatively, we can compute attention scores for each prompt token for further flexibility and expressiveness.
Here, instead of computing similarities between the summary representation of $\mathbf{H}_{out}$ and each prompt $\hat{\mathbf{P}}_j$ , we compute similarities between $\mathbf{H}_{out}$ and each $l$ th prompt token as follows:

$$
a_{lj} = \frac{e^{\mathbf{p}_{lj} \mathbf{H}_{out}}}{\sum_{k=1}^{t+1} e^{\mathbf{p}_{lk} \mathbf{H}_{out}}}. \tag{5}
$$

With prompt token-level attention, the $l$ th token of the interpolated prompt is calculated as the weighted sum of the $l$ th tokens of all prompts.

Empirically, token-level attention gives performance similar to the original attention in Eq. 3, while on some tasks it gives notable improvements. Due to the additional computational overhead, we use the max-pooling based unified attention (Eq. 3) as our default attention mechanism. Interestingly, we find that the attention distributions differ significantly among prompt tokens at different positions (e.g., giving significantly higher attention to the target task prompt at later token positions), potentially because of the position biases of pretrained models (Wang and Chen, 2020). This is beyond the scope of this work, but is certainly of interest for future work.

# B Task and Dataset Details

We list the datasets, tasks and domains of the source tasks in Table 6 and of the target tasks in Table 7, respectively. In summary, both source and target datasets cover diverse tasks, domains and output formats (i.e., span extraction, multiple-choice, classification).

# C Experimental Details

# C.1 Implementation Details

We use PyTorch $^6$ (Paszke et al., 2019) and huggingface transformers $^7$ (Wolf et al., 2020) to implement our models. For the Adapter, BitFit and prompt tuning baselines, we use the implementations by Mahabadi et al. (2021a).
$^8$ We use the huggingface datasets $^9$ library to load the data for the experiments, except for the MRQA 2019 shared task, for which we download the original training and development data from the official repository. $^{10}$

$^{6}$ https://pytorch.org/
$^{7}$ https://github.com/huggingface/transformers
$^{8}$ https://github.com/rabeehk/compacter
$^{9}$ https://github.com/huggingface/datasets

# Source Prompt Training

For the $j$ th of the $t$ source tasks, train a source prompt $\mathbf{P}_j$ individually by maximizing $p(\boldsymbol{y} \mid [\mathbf{P}_j, \mathbf{X}])$ (Section 3.1) [Eq. 2]

# Target Prompt Training

Initialization: initialize a new prompt $\mathbf{P}_{\text{target}}$ and attention module $\mathcal{G}$

For each instance $(\pmb{x},\pmb{y})$ , after passing $\pmb{x}$ to the embedding layer to get input embeddings $\mathbf{X}$ ,

Step 1: Compute the instance-wise prompt $\mathbf{P}_{\text{instance}}$ for $\mathbf{X}$ (Section 3.2)

1. calculate attentions between $\mathbf{X}$ and the set of prompts $[\mathbf{P}_1, \ldots, \mathbf{P}_t, \mathbf{P}_{\text{target}}]$ using $\mathcal{G}$ [Eq. 3]
2. interpolate $\mathbf{P}_1, \ldots, \mathbf{P}_t$ and $\mathbf{P}_{\text{target}}$ using the attention scores [Eq. 4]

Step 2: Prepend $\mathbf{P}_{\text{instance}}$ to $\mathbf{X}$ and feed the final input to the frozen LM $\theta$

Step 3: Maximize $p(\pmb{y} \mid [\mathbf{P}_{\text{instance}}, \mathbf{X}])$ and backpropagate to $\mathbf{P}_{\text{target}}$ and $\mathcal{G}$ via $\mathbf{P}_{\text{instance}}$ [Eq. 2]

Table 5: Training process of ATTEMPT.

# C.2 Source Prompt Training Details

We fine-tune the source prompts on six large-scale datasets for 5 epochs and use the checkpoints with the best development score as our source prompts. Each source prompt is initialized with randomly sampled vocabulary tokens, as in Lester et al. (2021).
We found that although this random-vocabulary initialization is often unstable even on large-scale datasets, it gives reasonable performance on the six source tasks, even with T5-small.

# C.3 Attention Module Pretraining Details

As the six source tasks have significantly different lengths of input context (e.g., the input contexts of MNLI, SST-2, QQP and QNLI are on average less than 200 tokens, while SQuAD and ReCoRD have contexts longer than 512 tokens), we split the source tasks into two groups: (1) MNLI, SST-2, QQP and QNLI; (2) SQuAD and ReCoRD. We use the resulting pretrained weights from group (2) for MRQA 2019, while for the other experiments, we use the weights from group (1).

# C.4 General Hyperparameters

We set the maximum token length to 512 for the MRQA datasets, 348 for MultiRC and 256 for all other datasets. All of the experiments are conducted on a single GPU with 24 GB memory. On all of the datasets, training was completed within 24 hours. The per-GPU batch size is 32; for MRQA, we set the per-GPU batch size to 16 with a gradient accumulation step of 2 to avoid out-of-memory errors.

# C.5 Hyperparameters for ATTEMPT

We use $T = d \times \exp(1)$ , where $d$ is the LM dimension size, to control the softmax temperature in Section 3.2. The prompt length $m$ is 100 and the prompt-tuning learning rate is 0.3; we optimize the objective function using Adam (Kingma and Ba, 2015). We set the weight decay to $1 \times 10^{-5}$ . For the projection layers, we use $r = 100$ . For the attention module $\mathcal{G}$ , we found that the best learning rate varies across datasets and tune it on the development sets. In particular, we use a learning rate of 0.1 for SuperGLUE and for the Yelp, WinoGrande, SciTail and PAWS multi-task experiments, and 0.3 for the other experiments.
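To make these pieces concrete, here is a minimal NumPy sketch of Step 1 of Table 5 with the temperature above, plus the token-level variant of Eq. 5 from Appendix A.5. Since Eq. 3 and Eq. 4 are not reproduced in this appendix, the max-pooled similarity and the plain attention-weighted interpolation below are simplifying assumptions, with toy dimensions and random values standing in for trained prompts:

```python
import numpy as np

t, m, d = 3, 4, 16                 # toy: 3 source prompts, prompt length 4, dim 16
T = d * np.exp(1)                  # softmax temperature from this section
rng = np.random.default_rng(0)
prompts = rng.normal(size=(t + 1, m, d))   # [P_1, ..., P_t, P_target]
X = rng.normal(size=(10, d))               # input embeddings of one instance
H_out = X.max(axis=0)                      # max-pooled summary of the input

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Unified attention: one score per prompt, shared by all m prompt tokens.
a = softmax(prompts.max(axis=1) @ H_out / T)          # (t+1,)
P_instance = np.einsum("j,jmd->md", a, prompts)       # (m, d)
model_input = np.concatenate([P_instance, X])         # Step 2: prepend to X

# Token-level variant (Eq. 5): one distribution over prompts per position l.
a_tok = softmax(np.einsum("jmd,d->mj", prompts, H_out), axis=1)  # (m, t+1)
P_tok = np.einsum("mj,jmd->md", a_tok, prompts)       # weighted sum per token
```

In the real model, `P_instance` is prepended to the frozen LM's input and only the target prompt and $\mathcal{G}$ receive gradients.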
# C.6 Hyperparameters for Baselines

For all of the baselines, we set the warmup steps to 500 and use Adam for optimization with a linear learning rate scheduler.

Prompt Tuning. As in ATTEMPT, we use a prompt length of $m = 100$ , a learning rate of 0.3, and a weight decay of $1 \times 10^{-5}$ .
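As a sanity check on the gradient accumulation used in Section C.4 (per-GPU batch 16 with accumulation step 2, matching the effective batch of 32): averaging micro-batch gradients reproduces the full-batch gradient. A NumPy check on a toy least-squares loss (the loss and shapes are illustrative, not the actual training objective):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)
X, y = rng.normal(size=(32, 3)), rng.normal(size=32)

def grad(w, X, y):
    # gradient of the mean squared error 0.5 * mean((Xw - y)^2)
    return X.T @ (X @ w - y) / len(y)

# Full batch of 32 vs. two accumulated micro-batches of 16.
g_full = grad(w, X, y)
g_acc = 0.5 * (grad(w, X[:16], y[:16]) + grad(w, X[16:], y[16:]))
assert np.allclose(g_full, g_acc)
```

The same equivalence is what lets the MRQA runs fit in 24 GB of GPU memory without changing the effective batch size.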
| Dataset Name | Category | Task | Domain | Metric |
| --- | --- | --- | --- | --- |
| 1. MNLI | GLUE | natural language inference (NLI) | various | accuracy |
| 2. SST-2 | GLUE | sentiment analysis | Movie Reviews | accuracy |
| 3. QQP | GLUE | paraphrase detection | social QA questions (Quora) | accuracy & F1 |
| 4. QNLI | GLUE | QA NLI | Wikipedia | accuracy |
| 5. SQuAD | MRQA 2019 | extractive QA | Wikipedia | F1 & EM |
| 6. ReCoRD | SuperGLUE | cloze-style QA | news (CNN, Daily Mail) | F1 & EM |

Table 6: The details of the 6 source tasks. MNLI, SST-2, QQP and QNLI are also used as target tasks in GLUE experiments.
| Dataset Name | Category | Task | Domain | Metric |
| --- | --- | --- | --- | --- |
| 1. CoLA | GLUE | acceptability | various | Matthews corr. |
| 2. STS-B | GLUE | sentence similarity | various | Pearson & Spearman corr. |
| 3. MRPC | GLUE | paraphrase detection | news | accuracy & F1 |
| 4. RTE | GLUE | NLI | News, Wikipedia | accuracy |
| 5. MultiRC | SuperGLUE | QA | various | F1 & EM |
| 6. BoolQ | SuperGLUE | boolean QA | Wikipedia | accuracy |
| 7. WiC | SuperGLUE | word sense disambiguation | lexical databases | accuracy |
| 8. WSC | SuperGLUE | coreference / commonsense | fiction books | accuracy |
| 9. CB | SuperGLUE | NLI | various | accuracy |
| 10. NQ | MRQA 2019 | extractive QA | Wikipedia | F1 & EM |
| 11. HotpotQA | MRQA 2019 | extractive QA | Wikipedia | F1 & EM |
| 12. SearchQA | MRQA 2019 | extractive QA | Search snippets | F1 & EM |
| 13. NewsQA | MRQA 2019 | extractive QA | News article | F1 & EM |
| 14. WinoGrande | Others | coreference / commonsense | WikiHow | accuracy |
| 15. Yelp | Others | sentiment analysis | Yelp reviews | accuracy |
| 16. SciTail | Others | NLI | science exams | accuracy |
| 17. PAWS-Wiki | Others | paraphrase detection | Wikipedia | accuracy |
Table 7: The details of the 17 target tasks other than the 4 GLUE datasets, which are also used for evaluation. "NQ" denotes Natural Questions; the lexical databases for WiC include WordNet, VerbNet and Wiktionary. For the datasets where two metrics are originally used, we use the underlined metric as our primary metric.

SPoT. We explore two approaches to initialize the target task prompt, as in Vu et al. (2022): SPoT-generic (SPoT-g) and SPoT-targeted (SPoT-t). SPoT-g first pre-trains source prompts on eight GLUE tasks and then uses them to initialize target task prompts, while SPoT-t uses prompt similarities to find the top- $k$ similar tasks and then initializes target task prompts using those top- $k$ prompts. As we only use 6 source tasks in this work, we use the top-1 similar prompt as the transfer source for SPoT-t. We use the same hyperparameters as in prompt tuning. To select the source task for SPoT-t, we run prompt tuning on all of the source and target tasks (for 5 epochs on medium and large-scale datasets and 20 epochs on smaller-scale datasets) and then compute the cosine similarity between a target prompt and the set of source prompts. For SPoT-g training, we train a single source prompt on the combination of the GLUE source tasks, following Vu et al. (2022). We found that the SPoT-g baseline is not strong on MRQA or Others (i.e., Yelp, SciTail, WinoGrande and PAWS-Wiki), while it gives small performance improvements over SPoT-t on some GLUE tasks. Therefore, we use SPoT-t in our main experiments.

Adapter. We use the default hyperparameters of Mahabadi et al. (2021a) for the Adapter baseline. We use GELU (Hendrycks and Gimpel, 2016) for the non-linear layers, set the reduction factor to 32 and the learning rate to $3 \times 10^{-4}$ .

BitFit. We use a learning rate of $3 \times 10^{-4}$ .

Fine-tuning. We use a learning rate of $3 \times 10^{-4}$ .
Other hyperparameters are the same as in the huggingface transformers T5 models.

# C.7 Multi-task Training Details

The 17 datasets have significantly different lengths of input context, and training on the combination of all of the datasets can make training inefficient. We conduct multi-tasking over 4 dataset groups (SuperGLUE, MRQA 2019, and others), while on GLUE, we train ATTEMPT-m on 8 GLUE tasks. We keep MultiRC training separate from the other SuperGLUE tasks, as MultiRC has significantly longer context than the other SuperGLUE datasets. We set the maximum input length to 256, 256, 512 and 256 for the GLUE, SuperGLUE, MRQA 2019 and Others task sets, respectively, and to 348 for MultiRC.

![](images/9b76aec91da3c8a523aa31e21a83980fbb785f8d96787b417e78dde445835c1b.jpg)
Figure 7: The number of the parameters to be updated with Adapter, BitFit, Fine-tuning, Prompt Tuning and ours using different backbone LMs. Ours and Ours-m denote ATTEMPT and ATTEMPT-m, respectively.

| | Adapter | | | Fine-tuning | | |
| --- | --- | --- | --- | --- | --- | --- |
| datasets | BoolQ | MRC | WiC | BoolQ | MRC | WiC |
| T5-small | 100 | 100 | 100 | 100 | 100 | 100 |
| T5-base | 64 | 64 | 100 | 32 | 32 | 100 |
| T5-large | 32 | 20 | 32 | 32 | 32 | 32 |
| T5-3B | 4 | 4 | 8 | - | - | - |

Table 8: The batch sizes for the fine-tuned models and Adapter in the scalability experiments.

# C.8 Few-shot Adaptation Experiments Details

Following Mahabadi et al. (2021b), we run the few-shot adaptation experiments three times and take the mean of the performance. We cite the performance of fine-tuning, Adapter and HyperFormer from Mahabadi et al. (2021b), train a single prompt tuning model on 8 GLUE tasks and then transfer it to the few-shot tasks. For ATTEMPT, we load the attention weights trained on the 8 GLUE tasks.

# C.9 Scaling Experiments Details

During this experiment, we use only a single GPU with 24 GB memory, as in our main experiments, to simulate a common resource environment. We found that under this computational constraint, we could not fine-tune the T5-3B model due to out-of-memory errors, even with a batch size of 1. Adapter, prompt tuning and ATTEMPT can be trained on a single GPU even with the T5-3B model. We provide the experimental details for the LM scaling experiments in Section 5.3. For ATTEMPT and prompt tuning, we use the same single GPU with 24 GB memory as in the main experiments. For Adapter and fine-tuning, we use a single GPU with 48 GB memory but restrict GPU memory usage to 24 GB for a fair comparison. For the scalability experiments, we set the maximum token length to 216 across all datasets.

Per-device batch size for ATTEMPT and prompt tuning. For T5-small and base, we set the per-GPU batch size to 100 and 32, while for T5-large and T5-XL (3B), we use batch sizes of 16 and 2, respectively.

Per-device batch size for Adapter. For the Adapter experiments, we flexibly adjust the per-device batch size for each dataset to avoid out-of-memory issues.
The per-device batch sizes are shown in Table 8.

Per-device batch size for fine-tuning. As with Adapter, we adjust the per-device batch size for the fine-tuned models; these are also shown in Table 8. For the fine-tuned models, we found that we cannot avoid out-of-memory issues with T5-3B even with a batch size of 1, so we report results with T5-small, base and large.

Performance Instability of fine-tuning with T5-large. We found that fine-tuning with T5-large is occasionally unstable and fails to learn a target task, and is sensitive to the batch size and learning rate. For instance, simply using a different batch size can result in $65\%$ BoolQ accuracy. For those cases, we explored several learning rates and batch sizes and report the best performance. Several prior works report the instability of fine-tuning large-scale LMs (Mosbach et al., 2021; Dodge et al., 2020).

# C.10 Memory Footprints

Despite their parameter efficiency, prompt tuning based approaches increase the sequence length by prepending continuous embeddings in front of the original input sequence (Lester et al., 2021; Mahabadi et al., 2021a). We evaluate the memory footprints of full fine-tuning, Adapter, BitFit, prompt tuning and ATTEMPT. We use T5-base as the default base LM and set the per-GPU batch size to
32. We also compare the memory footprint using T5-3B with a batch size of 2. We set the length of the prompt to 100.

| | memory footprint (base) | memory footprint (XL) |
| --- | --- | --- |
| Fine-tuning | 9.0 GB | - |
| Adapter | 5.9 GB | 14.5 GB |
| BitFit | 5.6 GB | 14.2 GB |
| Prompt Tuning | 8.5 GB | 15.9 GB |
| ATTEMPT (single) | 13.7 GB | 16.1 GB |

Table 9: The maximum memory footprint during training of fine-tuning, Adapter, BitFit, prompt tuning and ATTEMPT (single task).

As shown in Table 9, ATTEMPT increases the memory footprint relative to the other methods, due to the longer input, the multiple pre-loaded source prompts and the attention calculations. On the other hand, ATTEMPT shows only a moderate memory footprint increase when the backbone LM gets larger (13.7 GB to 16.1 GB), while Adapter and BitFit show about three times the memory footprints of their T5-base settings. This demonstrates that ATTEMPT is more parameter-efficient and can be more memory-efficient when the backbone LMs get even larger (e.g., 11 billion parameters). Moreover, Lester et al. (2021) show that the input prompt length can be significantly reduced when the backbone LMs get larger, which further improves the memory efficiency of prompt tuning-based methods.
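For intuition about where the prompt-length overhead comes from: attention score matrices grow quadratically with sequence length, so prepending a 100-token prompt to a 256-token input nearly doubles that term. A back-of-the-envelope sketch (the T5-base-like layer and head counts are our assumptions, and real memory also includes activations, gradients and optimizer state):

```python
def attention_cells(seq_len, n_layers=12, n_heads=12):
    """Entries in the attention score matrices: layers * heads * L * L."""
    return n_layers * n_heads * seq_len * seq_len

base = attention_cells(256)                 # input tokens only
with_prompt = attention_cells(256 + 100)    # 100 prompt tokens prepended
print(f"{with_prompt / base:.2f}x")         # prints "1.93x"
```

This quadratic term shrinks in relative importance as prompts get shorter, consistent with the observation from Lester et al. (2021) cited above.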
\ No newline at end of file diff --git a/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/images.zip b/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4ba3c169044d6e2aad8439e70a007caac98c283a --- /dev/null +++ b/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09ef0d18952d48e00014a42612c85765766659c39936234e4bc5a13449ac3ba1 +size 648311 diff --git a/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/layout.json b/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..575490a25955f96345d3a153a70d930b711ed3aa --- /dev/null +++ b/attemptparameterefficientmultitasktuningviaattentionalmixturesofsoftprompts/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b1456d57f283ddc32bc8007f6bb5d15e33bc725e66191bb793f7d8c02e8d499b +size 629952 diff --git a/aunifiedencoderdecoderframeworkwithentitymemory/7d6ad688-4808-4dde-83fe-7d8ae9d0a7fa_content_list.json b/aunifiedencoderdecoderframeworkwithentitymemory/7d6ad688-4808-4dde-83fe-7d8ae9d0a7fa_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e6c83b512828b6b39be8ed01bd9a619bc987b661 --- /dev/null +++ b/aunifiedencoderdecoderframeworkwithentitymemory/7d6ad688-4808-4dde-83fe-7d8ae9d0a7fa_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ca095411a8067c1ef98f5e2d28cb9410cea06e122d9376a0e7f09e7ad81f7e4 +size 109039 diff --git a/aunifiedencoderdecoderframeworkwithentitymemory/7d6ad688-4808-4dde-83fe-7d8ae9d0a7fa_model.json b/aunifiedencoderdecoderframeworkwithentitymemory/7d6ad688-4808-4dde-83fe-7d8ae9d0a7fa_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..bec428330edaaf95fb29a48facae6275c6a7a84d --- /dev/null +++ b/aunifiedencoderdecoderframeworkwithentitymemory/7d6ad688-4808-4dde-83fe-7d8ae9d0a7fa_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64e4ee8eacc0be600bab85cafad48f206bd3a070124c90bdf2a7629bf90c8b18 +size 129096 diff --git a/aunifiedencoderdecoderframeworkwithentitymemory/7d6ad688-4808-4dde-83fe-7d8ae9d0a7fa_origin.pdf b/aunifiedencoderdecoderframeworkwithentitymemory/7d6ad688-4808-4dde-83fe-7d8ae9d0a7fa_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..afdaa8eb8503621b37b0fe9a81a6b395a0a4d5dd --- /dev/null +++ b/aunifiedencoderdecoderframeworkwithentitymemory/7d6ad688-4808-4dde-83fe-7d8ae9d0a7fa_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d863cdea09caa050514a71ed681f06644e9f98ee92ac3a32c367a29a9287c7b +size 485594 diff --git a/aunifiedencoderdecoderframeworkwithentitymemory/full.md b/aunifiedencoderdecoderframeworkwithentitymemory/full.md new file mode 100644 index 0000000000000000000000000000000000000000..830859796af85c52f30deabe6cf9b4bd4199bf29 --- /dev/null +++ b/aunifiedencoderdecoderframeworkwithentitymemory/full.md @@ -0,0 +1,389 @@ +# A Unified Encoder-Decoder Framework with Entity Memory + +Zhihan Zhang $^{1}$ , Wenhao Yu $^{1}$ , Chenguang Zhu $^{2}$ , Meng Jiang $^{1}$ + +1University of Notre Dame, Notre Dame, IN, USA + +$^{2}$ Microsoft Cognitive Services Research, Redmond, WA, USA + +$^{1}$ zzhang23, wyu1, mjiang2@nd.edu; $^{2}$ chezhu@microsoft.com + +# Abstract + +Entities, as important carriers of real-world knowledge, play a key role in many NLP tasks. We focus on incorporating entity knowledge into an encoder-decoder framework for informative text generation. Existing approaches tried to index, retrieve, and read external documents as evidence, but they suffered from a large computational overhead. 
In this work, we propose an Encoder-Decoder framework with an entity Memory, namely EDMem. The entity knowledge is stored in the memory as latent representations, and the memory is pre-trained on Wikipedia along with the encoder-decoder parameters. To precisely generate entity names, we design three decoding methods to constrain entity generation by linking entities in the memory. EDMem is a unified framework that can be used on various entity-intensive question answering and generation tasks. Extensive experimental results show that EDMem outperforms both memory-based auto-encoder models and non-memory encoder-decoder models.

# 1 Introduction

A large amount of real-world knowledge is related to entities, e.g., persons, nations, and events. Entity knowledge is the information describing facts and attributes related to entities. Many entity-intensive NLP tasks require models to obtain entity knowledge in order to generate informative outputs, such as answering factual questions (Kwiatkowski et al., 2019), explaining claims (Onoe et al., 2021), or making informative conversations (Dinan et al., 2019). Pretrained encoder-decoder models can be directly applied to such entity-intensive tasks (Ye et al., 2020; Roberts et al., 2020), but their ability to store and use knowledge is still questionable (Lewis et al., 2021; Wang et al., 2021). A popular approach to incorporating knowledge into the generation process is retrieving evidence documents from external sources (Lewis et al., 2020b; Izacard and Grave, 2021; Oguz et al., 2020; Yu et al., 2022c). However, they suffer from significant computational overheads in indexing, retrieving, and reading a large number of extra documents (Lee et al., 2021; de Jong et al., 2022).

![](images/406a276db4da27e2486868653b2c87a20d42e035fc58bd5743ddfa14f324e5af.jpg)
Figure 1: An overview of the EDMem framework. H denotes the final hidden states of the encoder.
Therefore, it is important to give encoder-decoder models access to entity knowledge without sacrificing too much efficiency.

Recently, it has been proposed to use an in-model memory to augment auto-encoder models with entity knowledge on entity linking tasks (Févry et al., 2020; Verga et al., 2021; Sun et al., 2021). The entity memory stores entity knowledge as dense vectors which can be directly incorporated into the hidden states of Transformer models (Vaswani et al., 2017), with no need to encode extra text. However, the auto-encoder framework in previous approaches can only select entities from a pre-defined entity vocabulary. Hence, they are not able to predict an entity outside the vocabulary, nor to generate answers or text beyond a single entity.

In this paper, we propose a novel Encoder-Decoder framework with an entity Memory (EDMem), as shown in Figure 1. EDMem is a unified framework for various entity-intensive QA and generation tasks, in which we train an entity memory for efficient knowledge incorporation. First, EDMem is pre-trained on Wikipedia documents, where it learns entity embeddings in the memory along with an encoder-decoder model. EDMem learns to select relevant entities from the memory via an entity linking objective, and learns to generate answers using entity knowledge via a language modeling objective. Second, to precisely generate entity names, we design three decoding methods that utilize the entity linking ability of EDMem in its generation process, when we fine-tune it on downstream tasks. These include (1) free-form: left-to-right generation with entity identifiers; (2) static entity linking: first select entities by entity linking, build prefix trees for the selected entities, and then perform constrained entity generation using the trees; (3) dynamic entity linking: select entities on-the-fly for constrained entity generation.
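As an illustration of the prefix-tree constraint used in (2) and (3), here is a toy word-level trie over candidate entity names; the real model operates on subword token IDs, and the entity names below are only examples:

```python
def build_trie(entities):
    """Insert each tokenized entity name into a nested-dict prefix tree."""
    root = {}
    for name in entities:
        node = root
        for tok in name.split():      # toy word-level "tokenization"
            node = node.setdefault(tok, {})
        node["<end>"] = {}            # marks a complete entity name
    return root

def allowed_next(trie, prefix):
    """Tokens the decoder may emit after `prefix` under the constraint."""
    node = trie
    for tok in prefix:
        if tok not in node:
            return set()
        node = node[tok]
    return set(node)

trie = build_trie(["United Airlines", "United Nations", "Brett Hart"])
print(sorted(allowed_next(trie, [])))          # ['Brett', 'United']
print(sorted(allowed_next(trie, ["United"])))  # ['Airlines', 'Nations']
```

During constrained generation, the decoder masks its output distribution to the allowed continuations at each step inside an entity span and closes the span once a complete name (`<end>`) is reached.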
We conduct experiments on two popular testbeds of entity knowledge: open-domain QA and entity-intensive generation. With the incorporation of entity knowledge, EDMem outperforms non-memory encoder-decoder models on both tasks, and it retains the efficiency advantage of closed-book (i.e., non-retrieval) models. Compared to memory-based auto-encoders, EDMem achieves both higher overall accuracy $(+9\%)$ and better entity precision $(+8\%)$ on open-domain QA datasets, and it generates high-quality text from the memory-supported decoder on generation datasets where auto-encoders fail to do so. To summarize, EDMem is the first knowledge-augmented closed-book framework to perform both tasks in a unified manner.

# 2 Related Work

Closed-Book Models Closed-book models are pre-trained models that store knowledge in their own parameters. For example, COMET (Bosselut et al., 2019) fine-tuned GPT2 (Radford et al., 2018) to construct knowledge graphs by generating commonsense triples. Recently, fine-tuned BART (Lewis et al., 2020a) or T5 (Raffel et al., 2020) models have proved to be competitive on open-domain QA (Ye et al., 2020; Roberts et al., 2020). Therefore, closed-book models are able to memorize some entity knowledge after being pre-trained on massive data. However, studies showed that closed-book models merely recall similar inputs and answers from their pre-training corpus (Wang et al., 2021), and their performance is behind that of open-book models.

Open-Book Models Open-book models first retrieve evidence documents from external corpora and read these documents to predict an answer (Chen et al., 2017). REALM (Guu et al., 2020) proposed a self-supervised approach to pretrain a retriever-reader model. DPR (Karpukhin et al., 2020) devised a contrastive objective to train a dense bi-encoder retriever on open-domain QA.
Subsequent approaches combined DPR with a generative objective to build large, powerful models on open-domain QA and generation tasks (Lewis et al., 2020b; Izacard and Grave, 2021; Sachan et al., 2021; Yu et al., 2022a). However, open-book models have to process the raw text of all retrieved documents, which leads to extremely long inference times. Besides, additional overheads are brought by loading the document index and retrieving evidence documents for each example.

Entity Memory EaE (Févry et al., 2020) was the first to pre-train an entity memory with an auto-encoder framework to perform entity prediction on open-domain QA. FILM (Verga et al., 2021) followed EaE and added a fact memory containing representations of Wikidata triples. To better encode relational knowledge, OPQL (Sun et al., 2021) learned latent relational representations for arbitrary entity pairs. Recent work focused on learning a huge mention-level memory (~150M entries) with extensive pre-training (de Jong et al., 2022) or leveraging the entity memory in domain-adaptive training (Kang et al., 2022). These models are all based on an auto-encoder framework; thus, they are able to predict entity IDs but fail to generate any non-entity answers or sentences. A preprint contemporaneous to our work also trained a memory with an encoder-decoder model (Chen et al., 2022). However, it used QA pairs as memory entries instead of entities, limiting its application to QA tasks. Besides, its memory is much heavier (60M entries) than ours (1M).

# 3 Proposed Framework

Suppose we have a pre-defined vocabulary of $N$ entities $\mathcal{E} = \{e_1,\dots ,e_N\}$ . A mention is the actual tokens in context which refer to an entity. The set of all mentions in the corpus is denoted as $\mathcal{M}$ . Thus, there is a global alias table $\mathcal{T}:\mathcal{E}\to 2^{\mathcal{M}}$ , where each entity is mapped to all its mentions.
The input of EDMem is a sequence of tokens $\pmb{x}$ of length $S$, and the target output is another sequence $\pmb{y} = [y_{1},\dots ,y_{T}]$ of length $T$. Both sequences contain a pre-labeled set of mentions, and each mention refers to an entity in $\mathcal{E}$. We add two special tokens $[E_s]$ and $[E_e]$ to mark the "entity start" and "entity end" boundaries of a mention, e.g., "$[E_s]$ Brett Hart $[E_e]$ is the president of $[E_s]$ United Airlines $[E_e]$". These special tokens come from either Wikipedia hyperlinks (in pre-training, §3.3) or an entity linking model (in fine-tuning, §3.4).

# 3.1 Architecture

An overview of EDMem is presented in Figure 1. The framework has a transformer encoder, a transformer decoder, an entity memory, and two prediction heads. Both the encoder and decoder have two parts: $(L_{1}\times)$ lower layers and $(L_{2}\times)$ upper layers. Transformer layers in EDMem have the same architecture as BART (Lewis et al., 2020a). At the end of the lower layers, EDMem may use the hidden states as a query to access the entity memory. The knowledge representation obtained by each memory access is summed and normalized with the hidden states before further reasoning in the upper layers. Two prediction heads operate on the final hidden states of the decoder: an LM head for token prediction and an entity linking head for entity prediction (details are in §3.3). In practice, we follow EaE (Févry et al., 2020) and set $L_{1} = 4$ and $L_{2} = 8$.

# 3.2 Entity Memory

The entity memory is a large embedding table that stores the embeddings of entities in $\mathcal{E}$. Intuitively, an entity embedding aggregates the contextual information around all mentions of the entity in Wikipedia documents. During encoding and decoding, EDMem queries the entity memory whenever it encounters a mention. It recognizes mentions by identifying the $[E_s]$ token.
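A minimal NumPy sketch of this memory access, with hypothetical sizes (the actual model uses a 1M-entry table and learned projections):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d_ent, d_model = 1000, 256, 768  # hypothetical sizes; the paper uses N = 1M

E = rng.normal(size=(N, d_ent))                    # entity embedding table
W_in = rng.normal(size=(d_ent, d_model)) * 0.01    # projects hidden state to entity space
W_out = rng.normal(size=(d_model, d_ent)) * 0.01   # projects aggregated entity back
h_low = rng.normal(size=(d_model,))                # hidden state of an [E_s] token

def memory_access(h_low, E, W_in, W_out, top_k=100):
    scores = E @ (W_in @ h_low)                # e_i^T W_in h_low for every entity
    # at inference time, attend only over the top-k scoring entities
    idx = np.argpartition(scores, -top_k)[-top_k:]
    alpha = np.exp(scores[idx] - scores[idx].max())
    alpha /= alpha.sum()                       # softmax over the retained entities
    agg = alpha @ E[idx]                       # weighted sum of entity embeddings
    return W_out @ agg                         # h_ent, merged with h_low afterwards

h_ent = memory_access(h_low, E, W_in, W_out)
assert h_ent.shape == (d_model,)
```

In the full model, `h_ent` is summed and normalized with `h_low` before being fed into the upper transformer layers.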
EDMem takes the hidden state of the $[E_s]$ token as the query to retrieve relevant knowledge from the entity memory by attending to the entity embedding table (bias terms are omitted):

$$
\mathbf{h}_{s}^{ent} = \mathbf{W}_{out}\left(\sum_{i=1}^{N}\alpha_{i}\cdot \mathbf{e}_{i}\right), \tag{1}
$$

$$
\text{where}\quad \alpha_{i} = \frac{\exp\left(\mathbf{e}_{i}^{\top}\mathbf{W}_{in}\mathbf{h}_{s}^{low}\right)}{\sum_{j=1}^{N}\exp\left(\mathbf{e}_{j}^{\top}\mathbf{W}_{in}\mathbf{h}_{s}^{low}\right)}. \tag{2}
$$

$\mathbf{e}_i$ is the embedding of entity $e_i$, and $\mathbf{h}_s^{low}$ denotes the hidden state of the $[E_s]$ token from the lower encoder/decoder layers. $\mathbf{h}_s^{ent}$ is the aggregated entity representation, which is summed and normalized with $\mathbf{h}_s^{low}$ before being passed into the upper layers. $\mathbf{W}_{in}$ and $\mathbf{W}_{out}$ are linear projection layers for dimension matching. Following EaE, during inference we aggregate the entity representations of only the top-100 entities (sorted by $\alpha_{i}$) instead of attending to all $N$ entities.

# 3.3 Pre-Training

# 3.3.1 Pre-Training Corpus

We pre-train EDMem on the whole Wikipedia corpus. All documents are split into 128-token passages, with a 10-token sliding window between adjacent passages to avoid splitting an entity across two chunks. This yields a total of 39M passages, of which we hold out $0.5\%$ as the validation set during pre-training. We leverage Wikipedia hyperlinks as gold annotations of 249M mentions and their linked entities. Since hyperlinks do not cover all mentions in the text, we heuristically label missing mentions to create more training signals for the entity memory: we use the alias table $\mathcal{T}$ to label every mention in a Wikipedia page that matches either (1) a linked entity in the same page or (2) the title entity of the page.
This leads to a total of 468M mentions in the pre-training corpus. We collect the 1M most frequently linked entities to form the entity vocabulary $\mathcal{E}$. More details can be found in Appendix A.

# 3.3.2 Pre-Training Objective

Our pre-training objective is a combination of language modeling and entity linking. For the language modeling objective, we randomly corrupt parts of the input sequence and train EDMem to reconstruct the original sequence. We adopt two kinds of sequence corruption: random token masking and salient span masking. In random token masking, each token has a probability $P_{rtm}$ of being replaced by a [MASK] token. Salient span masking is adapted from Guu et al. (2020): each mention has a probability $P_{ssm}$ of having all of its tokens replaced by [MASK]. Such explicit masking of whole mention names encourages EDMem to rely on the entity memory when predicting mentions, which facilitates the learning of entity embeddings. The LM head performs token prediction through a linear-softmax layer, and the LM loss is the negative log-likelihood of the target sequence: $L_{LM} = -\sum_{j=1}^{T} \log P(y_j | x, y_{1:j-1})$.

EDMem provides direct supervision signals to the entity memory for entity representation learning. The entity linking loss is applied each time the model queries the entity memory. Besides in the middle of
The final loss function is $L_{LM} + \lambda_{EL}L_{EL}$ , where the coefficient $\lambda_{EL}$ is a hyper-parameter. + +# 3.4 Fine-Tuning + +EDMem is fine-tuned on downstream tasks via an LM objective and an entity linking objective. The LM objective is to maximize the probability of the task-specific output. The entity linking objective links mentions to entities in the memory, the same as pre-training. Mention boundaries are pre-labeled using an state-of-the-art entity linking model (Li et al., 2020). In entity-intensive downstream tasks, the entity memory assists sequence generation by not only providing entity knowledge but also generating entity names. Thus, we design three decoding settings to let the entity linking objective assist sequence generation. A sketch of different settings is given in Figure 2. + +Free-Form Generation In this setting, the model generates the output sequence entirely based on the + +probability given by the LM head. This includes the special tokens $[E_s]$ and $[E_e]$ which indicate an access to the memory. There is no constraint on what tokens to generate between $[E_s]$ and $[E_e]$ , i.e., the subsequence $[E_s], y_i, \dots, y_j, [E_e]$ may not be a valid entity name in the entity vocabulary. One advantage is that the model processes the entity knowledge in a latent manner, which does not explicitly affect the probability distribution of the language model. However, this may affect the model's performance in tasks where entity names are strictly required, e.g., open-domain QA tasks where exact match is used as evaluation. + +Static Entity Linking Static entity linking explicitly restricts the model to generate entity names for QA. Here, the decoding process is divided into two steps: entity linking and constrained generation. First, given a question, the model selects one or multiple entities as references. 
As shown in Figure 2(b), the question with an appended $[E_s]$ token as a placeholder is passed into the decoder, and the entity linking head is trained to predict the entity ID of the gold answer1. This yields the selected top-$k$ entities for each test question. We restrict the generation space to these top-$k$ entities whenever the model tries to generate an entity name. To achieve this, inspired by Cao et al. (2021), we build a prefix tree over the $k$ entities for each test example. The prefix tree tells the model which tokens may be generated given a prefix (i.e., the previously generated tokens). When the model generates an $[E_s]$ token, we restrict the following generated tokens to form one of the $k$ entity names (i.e., to follow one of the paths in the prefix tree). In this way, the model can either generate an entity answer (by generating $[E_s]$ and traversing the pre-built prefix tree) or generate a non-entity answer (if no $[E_s]$ token is generated). Readers can refer to Cao et al. (2021) for more implementation details.

**Dynamic Entity Linking** Static entity linking is applicable only when the downstream task can be converted into an entity linking objective. Another way to generate entities is to predict them on the fly. Each time the model generates an $[E_s]$ token, the entity linking head predicts the top-$k$ entities using the hidden state of $[E_s]$, conditioned on the previously generated tokens, as shown in Figure 2(c). This differs from static entity linking, where the model makes entity predictions solely dependent on the input sequence. A prefix tree of the names of the top-$k$ entities is also built on the fly for constrained entity generation.

# 4 Experiments

We test our EDMem framework on two testbeds of entity knowledge: open-domain QA and entity-intensive generation tasks.

# 4.1 Open-Domain QA

# 4.1.1 Data

Open-domain QA is a task where models are required to answer questions without any provided evidence.
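The prefix-tree constraint used by both static and dynamic entity linking (§3.4) can be sketched as follows; whole-word tokens stand in for the subword IDs a real implementation (following Cao et al., 2021) would use:

```python
# Minimal prefix-tree (trie) sketch for constrained entity-name generation,
# in the spirit of Cao et al. (2021). Entity names and whole-word "tokens"
# are hypothetical stand-ins for tokenizer subword IDs.
def build_trie(entity_names):
    trie = {}
    for name in entity_names:
        node = trie
        for tok in name.split():    # stand-in for subword tokenization
            node = node.setdefault(tok, {})
        node["[E_e]"] = {}          # the entity name may terminate here
    return trie

def allowed_tokens(trie, prefix):
    """Tokens the decoder may generate next, given the tokens emitted
    since the last [E_s]."""
    node = trie
    for tok in prefix:
        node = node[tok]
    return sorted(node.keys())

trie = build_trie(["New York City", "New York Knicks", "Chicago"])
print(allowed_tokens(trie, []))               # ['Chicago', 'New']
print(allowed_tokens(trie, ["New", "York"]))  # ['City', 'Knicks']
```

During decoding, the LM head's distribution is masked so that only the allowed tokens (or `[E_e]` where a name may end) receive probability mass.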
Questions are usually related to real-world facts and entities. We test EDMem on three popular datasets: Natural Questions (NQ) (Kwiatkowski et al., 2019), TriviaQA (TQA) (Joshi et al., 2017), and WebQuestions (WQ) (Berant et al., 2013). We follow the in-house splits introduced by Lee et al. (2019). We also report results on the dev set of the TQA official split to compare with EaE (Févry et al., 2020). We report exact match (EM) scores on these datasets.

We mainly compare with previous closed-book models (i.e., models without evidence retrieval), including traditional encoder-decoder models like BART (Lewis et al., 2020a) and T5 (Raffel et al., 2020), and memory-based auto-encoder models like RELIC (Ling et al., 2020), EaE, and FILM (Verga et al., 2021). Besides, we pre-train two ablations of EDMem. EncMem consists of an encoder and an entity memory, and is trained with the same objectives as EDMem. EncDec removes the entity memory from EDMem, and is trained with the same LM objectives. We also list the performance of state-of-the-art open-book models (i.e., models with evidence retrieval to assist prediction) for reference, such as REALM (Guu et al., 2020), RAG (Lewis et al., 2020b), and FiD (Izacard and Grave, 2021). We test three variants of EDMem, i.e., free-form generation (-free), static entity linking (-stat.), and dynamic entity linking (-dyn.).

# 4.1.2 Results

| Model | TQA (In-House Test) | TQA (Dev) | NQ (Test) | WQ (Test) |
|---|---|---|---|---|
| **Closed-Book Models** | | | | |
| BART-Large* | 25.02 | 27.28 | 24.82 | 29.23 |
| T5-Large* | - | 28.70 | 28.50 | 30.60 |
| EncDec* | 27.54 | 30.01 | 25.96 | 29.38 |
| RELIC† | 35.70 | - | - | - |
| EaE† | - | 43.20 | - | 39.00 |
| FILM† | 29.10 | - | - | - |
| EncMem† | 41.01 | 42.00 | 25.54 | 38.88 |
| EDMem-free | 42.24 | 43.31 | 29.14 | 36.47 |
| EDMem-stat. | 46.19 | 47.23 | 30.19 | 41.44 |
| EDMem-dyn. | 43.82 | 44.44 | 27.70 | 39.52 |
| **Open-Book Models** | | | | |
| REALM | - | - | 40.40 | 40.70 |
| RAG | 56.80 | - | 44.50 | 45.20 |
| FiD | 67.60 | - | 51.40 | 47.64 |

Table 1: Exact match scores on open-domain QA datasets. Bold and underlined scores are the best and second-best results among closed-book models. (*traditional encoder-decoder models, †memory-based auto-encoder models)

Experimental results on the open-domain QA datasets are listed in Table 1. With the same architecture, EncDec outperforms BART due to the additional salient span masking pre-training. Memory-based auto-encoder models like EaE and EncMem perform entity linking to provide answers. They outperform traditional encoder-decoder models by a large margin on TQA and WQ; however, target answers are mainly entities on both datasets2. On NQ, where there are fewer entity answers, the performance of EncMem is similar to BART-Large. Compared to the baselines, even the free-form EDMem outperforms memory-based auto-encoder models and traditional encoder-decoder models on TQA and NQ. EDMem-static and EDMem-dynamic explicitly copy entity names into the generated answers, which further improves EDMem's performance, especially on the TQA and WQ datasets, where a larger portion of answers are entities. Overall, EDMem improves over the best closed-book baselines by $9\%/6\%/6\%$ on TQA/NQ/WQ, respectively. Although closed-book models still trail open-book models in general, our approach shows that by combining the merits of encoder-decoder models like BART and entity linking models like EaE, EDMem narrows the gap to open-book approaches and even outscores the open-book model REALM on WQ.

# 4.1.3 Entity/Non-Entity Answers

To further investigate the improvements of EDMem over previous closed-book models, we calculate EM scores on two subsets divided w.r.t. the answer type (i.e., entity answers and non-entity answers).
| Model | TQA In-House Test (Total / Ent. / Non-Ent.) | TQA Official Dev (Total / Ent. / Non-Ent.) | NQ Test (Total / Ent. / Non-Ent.) | WQ Test (Total / Ent. / Non-Ent.) |
|---|---|---|---|---|
| BART | 25.02 / 27.92 / 9.31 | 27.28 / 30.52 / 9.70 | 24.82 / 28.54 / 15.95 | 29.23 / 32.28 / 5.24 |
| EaE | - / - / - | 43.20 / 51.43 / 0.00 | - / - / - | 39.00 / 42.86 / 0.00 |
| EncMem | 41.01 / 48.58 / 0.00 | 42.00 / 49.74 / 0.00 | 25.54 / 36.24 / 0.00 | 38.88 / 43.82 / 0.00 |
| EDMem-free | 42.24 / 48.27 / 9.59 | 43.31 / 49.44 / 10.10 | 29.14 / 34.32 / 16.19 | 36.47 / 40.16 / 7.42 |
| EDMem-stat. | 46.19 / 53.82 / 4.88 | 47.23 / 54.86 / 5.85 | 30.19 / 37.38 / 13.04 | 41.44 / 46.26 / 3.49 |
| EDMem-dyn. | 43.82 / 50.64 / 6.81 | 44.44 / 51.10 / 8.29 | 27.70 / 33.41 / 14.07 | 39.52 / 44.09 / 3.49 |

Table 2: Exact match scores on entity answers ("Ent.") and non-entity answers ("Non-Ent.") in open-domain QA.

If an answer can be directly linked to a Wikipedia entity according to Google's SLING (Ringgaard et al., 2017) phrase table, it is counted as an entity answer; otherwise it is a non-entity answer. As shown in Table 2, as an entity linking model, EaE cannot predict non-entity answers; as an encoder-decoder model, BART is able to generate a portion of non-entity answers, while its accuracy on entity answers is much lower than EaE's due to the lack of entity knowledge. EDMem incorporates entity knowledge into an encoder-decoder model, making it competitive on both entity and non-entity answers. However, the free-form generation variant is not as accurate on entity answers as EaE, because it may generate answers of any form, whereas EaE always predicts a valid entity name. The entity linking variants, on the other hand, remedy this issue by constraining the generation of entity names, either statically or dynamically. These approaches improve performance on entity answers, while sacrificing some performance on non-entity ones, and achieve the best overall performance across all answers. Besides, the best setting of EDMem outperforms EaE on entity answers as well, presumably due to the larger number of transformer layers trained in the encoder-decoder architecture compared to its auto-encoder counterpart.

# 4.1.4 Inference Efficiency

We compare the time efficiency of different models during inference in Table 3. We run EDMem, BART, and the open-book model FiD on the test set of TQA (11K questions) with $8 \times \mathrm{V}100$ GPUs. Compared to BART, EDMem needs to access a large entity memory multiple times, which slows down the inference time from 10s to 28s. However, such a time cost is much smaller than the gap between closed-book models and the open-book FiD (85min). In the open-book setting, the model needs to (1) load the pre-computed index from disk to
the RAM, (2) retrieve evidence documents from the index, and (3) read all the evidence documents to generate an answer. In addition to the overhead caused by accessing the index, the model needs to encode the raw text of all evidence documents (i.e., 100 documents for FiD) before generating an answer with the decoder, whereas EDMem and BART only need to encode the question itself. Thus, EDMem achieves significant improvements over traditional encoder-decoder models while retaining the efficiency advantage of closed-book models.

| Type | Model | $T_{ind}$ | $T_{ret}$ | $T_{pred}$ | EM |
|---|---|---|---|---|---|
| Closed-book | BART | 0 | 0 | 17s | 25.02 |
| | EDMem-free | 0 | 0 | 28s | 42.24 |
| | EDMem-dyn. | 0 | 0 | 48s | 43.82 |
| | EDMem-stat. | 0 | 0 | 59s | 46.19 |
| Open-book | FiD | 29min | 15min | 41min | 67.60 |

Table 3: Inference time on the TriviaQA test set. $T_{ind}$ is the time for loading the index, which is a fixed amount of time; $T_{ret}$ and $T_{pred}$ denote the time for document retrieval and answer prediction, which increase linearly with the number of test examples. The inference time of EDMem-stat. is the sum of the entity linking step and the constrained generation step.

# 4.1.5 Size of Entity Memory

We compare the performance of EDMem and its auto-encoder variant EncMem across different sizes of the entity memory. We randomly mask out entities from the original 1M vocabulary and re-train the model. Embeddings of masked entities do not participate in computing attention while accessing the memory. According to the curves in Figure 3, due to its ability of closed-book generation, EDMem is less sensitive to the size of the entity memory, resulting in a smaller slope when fewer entities are visible. In particular, EDMem is still able to generate many correct answers even
when we remove the whole memory. In contrast, EncMem can only predict random entities when the entire memory is masked, which leads to a score close to zero. These results show the advantage of encoder-decoder models over auto-encoder models when jointly trained with an entity memory, especially in low-resource scenarios.

In addition, we illustrate the performance trend of EDMem on entity answers and non-entity answers in TQA (Figure 4). When all entities are masked, the model deteriorates to its non-memory variant EncDec. As more entity knowledge becomes available, EDMem performs better at predicting entity answers, while its generation performance remains consistent. These results show that the advantage of the memory-based EDMem over traditional encoder-decoder models on entity-intensive tasks comes from the incorporation of entity knowledge from the entity memory.

![](images/41109a9108e5a85681bf2694521986105f70bf716852b51634b767bef610e550.jpg)
Figure 3: TQA performance of EDMem and EncMem on different memory sizes.

![](images/6854615592d378451fe74c6facb62199fa7b6ce16b4663fad164144e040b2db3.jpg)
Figure 4: TQA performance of EDMem on entity and non-entity answers, trained with different memory sizes.

| Dataset | Model | ROUGE-1 | ROUGE-2 | ROUGE-L | F1 | BERTScore | Entity Coverage (Total) | Entity Coverage (Unseen) |
|---|---|---|---|---|---|---|---|---|
| MSMARCO | BART-Large | 56.72 | 37.62 | 53.26 | 53.86 | 89.34 | 43.87 | 22.40 |
| | EDMem-free | 57.67 | 39.45 | 54.45 | 55.08 | 89.40 | 45.53 | 25.33 |
| | EDMem-dyn. | 55.96 | 37.42 | 52.84 | 52.94 | 88.49 | 51.28 | 28.49 |
| | FiD (open-book) | 60.54 | 42.96 | 57.22 | 58.12 | 79.79 | 49.63 | 32.58 |
| CREAK | BART-Large | 32.87 | 14.93 | 30.34 | 30.20 | 81.21 | 46.87 | 14.98 |
| | EDMem-free | 33.81 | 16.49 | 31.78 | 31.34 | 86.11 | 49.06 | 16.65 |
| | EDMem-dyn. | 32.70 | 15.75 | 30.68 | 30.32 | 85.76 | 49.90 | 18.76 |
| | FiD (open-book) | 35.96 | 18.11 | 33.54 | 33.57 | 81.02 | 51.02 | 21.09 |
| ELI5 | BART-Large | 25.87 | 5.89 | 22.99 | 19.38 | 81.45 | 29.18 | 14.83 |
| | EDMem-free | 27.14 | 5.70 | 23.24 | 20.19 | 80.78 | 38.76 | 18.61 |
| | EDMem-dyn. | 27.48 | 7.14 | 23.97 | 20.66 | 81.17 | 45.66 | 23.31 |
| | FiD (open-book) | 25.06 | 5.98 | 22.12 | 18.48 | 80.96 | 28.06 | 16.96 |
| WoW | BART-Large | 19.52 | 3.42 | 17.22 | 15.37 | 81.92 | 11.78 | 4.05 |
| | EDMem-free | 18.92 | 3.50 | 16.52 | 15.28 | 83.19 | 16.71 | 6.83 |
| | EDMem-dyn. | 19.54 | 4.00 | 17.01 | 15.51 | 83.24 | 17.29 | 7.81 |
| | FiD (open-book) | 22.72 | 6.43 | 20.16 | 18.48 | 81.06 | 17.91 | 11.99 |

Table 4: Results on entity-intensive generation datasets. Bold scores are the best results among closed-book models.

# 4.2 Entity-Intensive Generation

# 4.2.1 Data

To test EDMem's ability to generate longer sentences with entities, we perform experiments on several generation datasets with rich entity mentions. We choose Wizard of Wikipedia (WoW) (Dinan et al., 2019) for knowledge-aware dialogue, MSMARCO-NLGen (Nguyen et al., 2016) and ELI5 (Fan et al., 2019) for abstractive question answering, and CREAK (Onoe et al., 2021) for claim explanation3. For MSMARCO and CREAK, we report results on the official dev set while holding out $10\%$ of the training data for validation.
For ELI5, to keep a reasonable density of entities, we filter a subset of 85K data where the input and output are + +
both no longer than 75 tokens. Detailed dataset settings are provided in Appendix D.2.

We report ROUGE (Lin, 2004) and unigram F1 scores, as well as BERTScore (Zhang et al., 2020a) for semantics-based evaluation. We also include metrics for evaluating entity generation. Given the entities in the ground truth as references, we calculate the coverage ratio of reference entities in the model-generated output. We also count mentions of these entities as correct matches, according to the alias table $\mathcal{T}$. To exclude cases where entities in the output can be directly copied from the input4, we additionally report the coverage ratio of unseen entities, i.e., entities in the ground-truth output that do not appear in the input.

| Model | MSMARCO (Flu.↑ / Rel.↑ / Cor.↑) | CREAK (Flu.↑ / Rel.↑ / Cor.↑) | ELI5 (Flu.↑ / Info.↓ / Rea.↓) | WoW (Flu.↑ / Info.↓ / Rea.↓) |
|---|---|---|---|---|
| BART-Large | 2.90 / 1.93 / 1.57 | 2.97 / 1.82 / 1.63 | 2.77 / 2.26 / 2.34 | 2.70 / 2.45 / 2.51 |
| EDMem-free | 2.90 / 2.35 / 2.38 | 2.99 / 2.35 / 2.05 | 2.86 / 1.97 / 2.06 | 2.83 / 1.86 / 2.14 |
| EDMem-dynamic | 2.87 / 2.45 / 2.49 | 2.97 / 2.32 / 2.04 | 2.86 / 1.75 / 1.92 | 2.80 / 1.75 / 2.03 |
| Human | - | - | 2.75 / 2.01 / 2.19 | 2.68 / 1.66 / 1.86 |

Table 5: Human evaluation results. "Flu.", "Rel.", and "Cor." stand for fluency, relevance, and correctness, scored on a 1-3 scale. "Info." and "Rea." stand for informativeness and reasonability, for which the four generations are ranked #1-#4. ↑ indicates higher values are better; ↓ indicates lower values are better. We run a paired sample $t$-test comparing EDMem with BART. Bold scores indicate a significant difference with $p$-value $< 0.01$, and underlined scores indicate $p$-value $< 0.05$.

# 4.2.2 Results

Auto-encoder models like EaE are not applicable to these datasets, so we compare EDMem to the traditional encoder-decoder model BART. As shown in Table 4, the free-form EDMem outperforms BART on both reference-based metrics (ROUGE, F1, BERTScore) and entity coverage scores. This indicates that the entity knowledge in the entity memory helps generate sentences with the desired entities and correct entity-related information. Since these datasets cannot be directly converted into an entity linking setting, EDMem-static is not applicable here.
The dynamic entity linking variant outperforms the free-form variant and BART in entity coverage scores on all datasets, while sacrificing little fluency on the reference-based metrics. We find that both EDMem variants outscore BART on entity coverage by a large margin (up to $56\%$ overall and up to $93\%$ on unseen entities), which indicates a much stronger ability of the EDMem models in entity generation.

# 4.2.3 Human Evaluation

To test whether the model generations are reasonable to humans, we conduct a human evaluation on Amazon's MTurk platform. For each dataset, we sample 50 examples with generations from BART and EDMem. We ask three annotators to evaluate each example on fluency and two knowledge-related metrics. For MSMARCO and CREAK, the knowledge-related metrics are topic relevance and factual correctness, given the ground truth as reference. For ELI5 and WoW, since the ground truth is not the only possible answer to the context, we use informativeness and reasonability as knowledge-related metrics, and we also evaluate the human-written answers. Detailed descriptions of these metrics are in Appendix F. When evaluating informativeness and reasonability, annotators are asked to rank the generations from #1 to #4, so lower rankings indicate better results.

As shown in Table 5, EDMem generates more informative and factually correct sentences than BART, which lacks knowledge incorporation from the entity memory. Moreover, such knowledge incorporation does not harm the fluency of model generations. On 3 out of 4 datasets, EDMem-dynamic achieves the best results on the knowledge-based metrics. This indicates that integrating entity linking with text generation is beneficial for generating informative sentences with rich entity knowledge. Interestingly, annotators even prefer EDMem's generations over human answers on ELI5.
One possible reason is that human answers are usually longer than model-generated ones, so not all clauses are closely related to the question. Also, the quality of some Reddit responses (i.e., the source of ELI5 data) may not be reliable. + +
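The significance testing reported with Table 5 (a paired sample $t$-test over per-example annotator scores) can be sketched as follows; the scores below are made up for illustration:

```python
import math

def paired_t_statistic(scores_a, scores_b):
    """t statistic of a paired sample t-test; the p-value is then read from a
    t-distribution with n-1 degrees of freedom (e.g. via scipy.stats.ttest_rel)."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# made-up per-example 1-3 relevance scores for two systems
edmem = [3, 2, 3, 2, 3, 3, 2, 3, 3, 2]
bart = [2, 2, 3, 1, 2, 3, 2, 2, 2, 1]
t = paired_t_statistic(edmem, bart)  # ≈ 3.67 for these made-up scores
```

Because the two systems are scored on the same examples, the paired test compares per-example differences rather than the two score distributions independently.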
| | |
|---|---|
| Claim | Chicago Symphony Orchestra started in Indiana. This is false because _____________ |
| Ground truth | Chicago is in Illinois so it did not start in Indiana. |
| BART | It was not started here. |
| EDMem-free | The [Es] Chicago Symphony Orchestra [Ee] started in [Es] Utah [Ee]. Attended entities: "Illinois", "Chicago", "Cook County, Illinois", "Wisconsin", "United States" |
| EDMem-dynamic | [Es] Chicago Symphony Orchestra [Ee] was founded in [Es] Illinois [Ee]. Attended entities: "Illinois", "Cook County, Illinois", "Chicago", "Wisconsin", "United States" |
Table 6: Case study from the CREAK dataset. We list the top-5 entities that EDMem attends to when it generates the underlined entity.

# 4.2.4 Case Study

In Table 6, we show an example from CREAK with generations from different models. Without knowledge augmentation, BART fails to generate an informative explanation of why the starting place of the orchestra is not Indiana. Although EDMem-free steps closer to the correct explanation, it falsely predicts that the orchestra started in Utah. However, "Utah" does not appear in the top-5 linked entities during memory access. After we constrain the generation space of EDMem-dynamic to the top-5 predicted entities, "Utah" is no longer a valid generation, and the model finds "Illinois" as the correct location. Examples from other datasets can be found in Appendix G.

# 4.2.5 Impact of Entity Richness on Generation Improvement

In Table 7, we show ROUGE-L scores broken down by the number of entity mentions in the ground truth. Examples with more mentions require more entity knowledge to generate. We list scores for the CREAK dataset, where the outputs are short factual claims, and the ELI5 dataset, where the outputs are long and diverse answers. In both datasets, the improvement of EDMem over BART occurs on entity-rich generations. For examples that do not need entity knowledge (0 mentions), there is little difference between the two models. This further demonstrates the effectiveness of incorporating knowledge from the entity memory on entity-intensive generation tasks.
| #Mentions | CREAK (BART) | CREAK (EDMem) | ELI5 (BART) | ELI5 (EDMem) |
|---|---|---|---|---|
| 0 | 8.89 | 8.86 | 13.75 | 14.28 |
| 1 | 26.28 | 27.47 | 15.96 | 16.38 |
| 2 | 35.33 | 37.10 | 16.41 | 17.50 |
| 3 | 34.63 | 36.55 | 17.91 | 19.76 |
| 4 | 34.05 | 34.78 | 17.14 | 19.42 |
| 5+ | 30.84 | 31.56 | 18.84 | 20.23 |
Table 7: ROUGE-L scores broken down by the number of mentions in the ground-truth reference.

# 5 Conclusions

In this work, we proposed EDMem, an encoder-decoder framework with an entity memory. The entity memory was pre-trained on Wikipedia to provide entity knowledge for the encoder-decoder model. EDMem also performs entity linking with the memory to assist entity generation in downstream tasks. As a unified framework, EDMem outperformed previous closed-book models on various entity-intensive QA and generation tasks, while retaining its efficiency advantage over open-book models. Further analysis showed that EDMem benefits both from entity linking with the entity memory and from generation with the encoder-decoder framework.

# 6 Limitations

First, when applying EDMem to other datasets, its performance may depend on the density of entity mentions in the data examples. EDMem may not be able to acquire sufficient entity knowledge from the memory if there are few mentions in the specific task. Another limitation of our work is that the pre-trained entity memory may not generalize to specialized domains, e.g., biomedical text. Much domain-specific terminology is not included in our pre-trained entity memory, which may require additional training on domain-specific corpora.

# Acknowledgement

This work was supported in part by NSF IIS-1849816, IIS-2119531, IIS-2137396, IIS-2142827, CCF-1901059, and ONR N00014-22-1-2507. We would like to thank Yuwei Fang (Microsoft), Jinfeng Lin (Meta), Mingxuan Ju (University of Notre Dame), and Qian Liu (Nanyang Technological University) for their valuable suggestions to this work.

# References

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013.
+Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019. +Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval. In 9th International Conference on Learning Representations, ICLR 2021. +Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017. +Wenhu Chen, Pat Verga, Michiel de Jong, John Wieting, and William Cohen. 2022. Augmenting pre-trained language models with qa-memory for open-domain question answering. ArXiv preprint 2204.04581. +Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, and William Cohen. 2022. Mention memory: incorporating textual knowledge into transformers through entity mention attention. In International Conference on Learning Representations, ICLR 2022. +Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In 7th International Conference on Learning Representations, ICLR 2019. +Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: long form question answering. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019. +Thibault Févry, Livio Baldini Soares, Nicholas FitzGerald, Eunsol Choi, and Tom Kwiatkowski. 2020. Entities as experts: Sparse memory access with entity supervision. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020. +Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. 
REALM: retrieval-augmented language model pre-training. ArXiv preprint 2002.08909. +Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th + +Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021. +Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017. +Minki Kang, Jinheon Baek, and Sung Ju Hwang. 2022. KALA: knowledge-augmented language model adaptation. ArXiv preprint 2204.10555. +Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020. +Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Trans. Assoc. Comput. Linguistics. +Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. 2021. Learning dense representations of phrases at scale. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021. +Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019. 
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020.

Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020.

Patrick S. H. Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021. Question and answer test-train overlap in open-domain question answering datasets. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021.

Belinda Z. Li, Sewon Min, Srinivasan Iyer, Yashar Mehdad, and Wen-tau Yih. 2020. Efficient one-pass end-to-end entity linking for questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81.

Jeffrey Ling, Nicholas FitzGerald, Zifei Shan, Livio Baldini Soares, Thibault Févry, David Weiss, and Tom Kwiatkowski. 2020. Learning cross-context entity representations from text. ArXiv preprint 2001.03765.

Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019.

Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory F. Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu.
2018. Mixed precision training. In 6th International Conference on Learning Representations, ICLR 2018.

Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches 2016, co-located with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016).

Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Scott Yih. 2020. UniK-QA: Unified representations of structured and unstructured knowledge for open-domain question answering. ArXiv preprint 2012.14610.

Yasumasa Onoe, Michael J. Q. Zhang, Eunsol Choi, and Greg Durrett. 2021. CREAK: A dataset for commonsense reasoning over entity knowledge. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021.

Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick S. H. Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language models are unsupervised multitask learners. Technical Report, OpenAI.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res.

Michael Ringgaard, Rahul Gupta, and Fernando C. N. Pereira. 2017.
SLING: A framework for frame semantic parsing. ArXiv preprint 1710.07032.

Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020.

Devendra Singh Sachan, Siva Reddy, William L. Hamilton, Chris Dyer, and Dani Yogatama. 2021. End-to-end training of multi-document reader and retriever for open-domain question answering. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021.

Haitian Sun, Patrick Verga, Bhuwan Dhingra, Ruslan Salakhutdinov, and William W. Cohen. 2021. Reasoning over virtual knowledge bases with open predicate relations. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, NeurIPS 2017.

Pat Verga, Haitian Sun, Livio Baldini Soares, and William W. Cohen. 2021. Adaptable and interpretable neural memory over symbolic knowledge. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021.

Cunxiang Wang, Pai Liu, and Yue Zhang. 2021. Can generative pre-trained language models serve as knowledge bases for closed-book QA? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021.

Qinyuan Ye, Belinda Z. Li, Sinong Wang, Benjamin Bolte, Hao Ma, Wen-tau Yih, Xiang Ren, and Madian Khabsa. 2020. Studying strategically: Learning to mask for closed-book QA. ArXiv preprint 2012.15856.
Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, and Michael Zeng. 2022a. KG-FiD: Infusing knowledge graph in fusion-in-decoder for open-domain question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022.

Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2022b. Generate rather than retrieve: Large language models are strong context generators. ArXiv preprint 2209.10063.

Wenhao Yu, Chenguang Zhu, Zaitang Li, Zhiting Hu, Qingyun Wang, Heng Ji, and Meng Jiang. 2022c. A survey of knowledge-enhanced text generation. ACM Computing Surveys (CSUR).

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020a. BERTScore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020.

Zhihan Zhang, Xiubo Geng, Tao Qin, Yunfang Wu, and Daxin Jiang. 2021. Knowledge-aware procedural text understanding with multi-stage training. In WWW '21: The Web Conference 2021.

Zhihan Zhang, Zhiyi Yin, Shuhuai Ren, Xinhang Li, and Shicheng Li. 2020b. DCA: diversified co-attention towards informative live video commenting. In Natural Language Processing and Chinese Computing - 9th CCF International Conference, NLPCC 2020.

Zhihan Zhang, Wenhao Yu, Mengxia Yu, Zhichun Guo, and Meng Jiang. 2022. A survey of multi-task learning in natural language processing: Regarding task relatedness and training methods. ArXiv preprint 2204.03508.

# A Pre-Training

# A.1 Pre-Training Data

We pre-train our model on the Wikipedia corpus of over 5 million documents. All documents are split into 128-token passages. The last passage is rounded up to 128 tokens by appending tokens from the beginning of the same document, so there are no cross-document passages.
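A minimal sketch of this splitting scheme (the helper name is ours, not from the paper; combined with the 10-token sliding window described next, the stride between passage starts is 118 tokens):

```python
def split_into_passages(tokens, size=128, overlap=10):
    """Split one document into fixed-size passages.

    Adjacent passages overlap by `overlap` tokens, and the final passage
    is rounded up to `size` tokens by wrapping around to the beginning
    of the same document, so no passage crosses document boundaries.
    """
    stride = size - overlap
    passages = []
    for start in range(0, len(tokens), stride):
        chunk = tokens[start:start + size]
        if len(chunk) < size:  # round up from the document head
            chunk = chunk + tokens[:size - len(chunk)]
        passages.append(chunk)
        if start + size >= len(tokens):
            break
    return passages
```

For a 300-token document this yields three 128-token passages, the last one padded with the document's first 64 tokens.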
In addition, we set a 10-token sliding window between passages to avoid an entity being split across two adjacent chunks. This setting yields a total of 39 million passages, of which we hold out $0.5\%$ as the validation set during pre-training. For supervision signals on the entity memory, we leverage Wikipedia hyperlinks as gold annotations. Each hyperlink provides the boundaries of a mention, and also the corresponding entity that the mention is linked to.

However, the average density of Wikipedia hyperlinks is only one in 21 words, i.e., about 6 mentions per passage. This is because, on a given page, (1) only the first mention of an entity is linked and (2) the title entity is not linked, since a hyperlink always redirects to a different page. To provide more supervision signals for entity embedding learning, we label the missing mentions using heuristic rules. We use the alias table $\mathcal{T}$ to label all mentions in a Wikipedia page if they match either (1) a linked entity on the same page, or (2) the title entity of that page. After this heuristic labeling, the hyperlink density increases to one in 11 words, with a passage having 12 mentions on average. We manually checked 50 passages and found the precision of the heuristic labeling to be $92\%$, an acceptable rate.

# A.2 Pre-Training Settings

We pre-train our model on the Wikipedia corpus containing 39 million passages for 1 million steps using a batch size of 2048. The AdamW optimizer (Loshchilov and Hutter, 2019) is used with a maximal learning rate of $1 \times 10^{-4}$ and a weight decay coefficient of 0.01. The learning rate is warmed up for $10\%$ of the training steps and then decays linearly. The mask rate for random token masking is $P_{rtm} = 0.3$, and the mask rate for salient span masking is $P_{ssm} = 0.5$ (ablations in Appendix E.1). The maximum length of the input sequence is set to 128.
The coefficient of the entity linking objective is set to $\lambda_{EL} = 1.0$ and the dropout rate is 0.1. The whole model is trained from scratch. We tried initializing the encoder-decoder model with BART (Lewis et al., 2020a) and deriving entity embeddings from BART embeddings, but the model proved unstable in further training. We use mixed-precision floating-point arithmetic (Micikevicius et al., 2018) to speed up training. The full setting of EDMem takes about two weeks to train on $16\times \mathrm{A}100$ GPUs.

# B Fine-Tuning

Unlike in pre-training on Wikipedia, open-domain QA and generation tasks provide no gold annotations of mention boundaries in the input and output. Therefore, we annotate mention boundaries as well as the linked entities using a state-of-the-art neural entity linker, ELQ (Li et al., 2020). For generation datasets, we pass the source sequence and the target sequence into the ELQ model separately and obtain their mention annotations. For open-domain QA datasets, since the answers are usually short, we concatenate the question and the answer as input to the ELQ model. During fine-tuning, we tune the hyperparameters within the following ranges: learning rate $\in$ {5e-6, 1e-5, 2e-5, 3e-5}, $\lambda_{EL} \in$ {0.5, 1.0, 2.0}, dropout rate $\in$ {0.1, 0.2, 0.3}, beam size $\in$ {1, 3, 5}, #candidate entities (in static/dynamic entity linking) $\in$ {1, 3, 5}. They are tuned based on the main evaluation metric of the specific task (open-domain QA: EM; WoW: F1; other generation datasets: ROUGE-L) on the dev set. Batch size is fixed to 256 unless it exceeds GPU memory or the dataset is too small (e.g., CREAK and WQ). Early stopping is used with a patience of 20 evaluation steps on the dev set. Fine-tuning EDMem usually costs a few hours (e.g., ~3 hours on TQA) on $8 \times \mathrm{V}100$ GPUs.
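The search space above contains 4 × 3 × 3 × 3 × 3 = 324 configurations per dataset; a sketch of the enumeration (the values are from the paper, but the tuning driver and the hypothetical `evaluate_on_dev` callback, which stands in for a full fine-tuning run, are illustrative only):

```python
from itertools import product

# Hyperparameter ranges listed in Appendix B.
SEARCH_SPACE = {
    "learning_rate": [5e-6, 1e-5, 2e-5, 3e-5],
    "lambda_el": [0.5, 1.0, 2.0],      # entity-linking loss coefficient
    "dropout": [0.1, 0.2, 0.3],
    "beam_size": [1, 3, 5],
    "num_candidates": [1, 3, 5],       # static/dynamic entity linking
}

def grid(space):
    """Yield every hyperparameter combination as a dict."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

def tune(evaluate_on_dev):
    """Keep the configuration maximizing the task's dev metric
    (EM for open-domain QA, F1 for WoW, ROUGE-L otherwise)."""
    return max(grid(SEARCH_SPACE), key=evaluate_on_dev)
```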
# C Entity Memory Settings

We collect the 1 million most frequent entities in Wikipedia documents as our entity vocabulary $\mathcal{E}$. The frequency of an entity is calculated from the number of hyperlinks pointing to the Wikipedia page of that entity. The dimension of the entity embeddings learned in the memory is set to 256. The model attends to all 1 million entities during training. During inference, the top-100 entities are selected according to dot-product similarity, and we only
| Dataset | Train | Dev | Test |
|---|---|---|---|
| TQA (In-House) | 78,785 | 8,837 | 11,313 |
| TQA (Official) | 87,622 | 11,313 | Not Used |
| NQ | 79,168 | 8,757 | 3,610 |
| WQ | 3,417 | 361 | 2,032 |
+ +Table 8: Statistics of open-domain QA datasets. + +
| Dataset | Train | Dev | Test |
|---|---|---|---|
| MSMARCO | 138,352 | 15,373 | 12,467 |
| CREAK | 9,158 | 1,018 | 1,371 |
| ELI5 | 75,220 | 8,361 | 1,384 |
| WoW | 54,330 | 3,054 | 2,944 |
Table 9: Statistics of generation datasets.

integrate the embeddings of these 100 entities when performing attention.

# D Datasets

# D.1 Open-Domain QA Datasets

Statistics of the open-domain QA datasets are listed in Table 8. In TQA, most previous works used the in-house split provided by Lee et al. (2019), while we also test EDMem on the official dev set to compare with the scores reported by EaE.

# D.2 Generation Datasets

Here we provide detailed descriptions of the generation datasets used in our experiments. Statistics of these datasets are listed in Table 9.

**MSMARCO** MSMARCO (Nguyen et al., 2016) was originally collected for the abstractive QA task. We use the NLGen split, where answers are sentences carefully written by human workers. Since the official leaderboard has been closed, we test our model on the official dev set and hold out $10\%$ of the training examples for validation.

**CREAK** CREAK (Onoe et al., 2021) is a recent dataset for claim verification and explanation. In our experiments, we target the explanation subtask. The model is given a factual claim and a true/false judgment, and is expected to generate an explanation of why the claim is true or false. Since the official test set does not have explanations, we report results on its dev set and hold out $10\%$ of the training set for validation.

**ELI5** ELI5 (Fan et al., 2019) is a dataset for generating long-form responses to factual questions. To keep a reasonable density of entities, we filter a subset where the input question and the output response are both no longer than 75 tokens. We also remove examples that have no entity mentions in the output. This results in a total of 85K data examples.

**Wizard of Wikipedia (WoW)** WoW (Dinan et al., 2019) is a dialogue dataset where entity knowledge is included in speakers' responses.
We use the open-domain setting of this dataset provided by the KILT benchmark (Petroni et al., 2021), where no candidate knowledge pieces are given for producing the response. We additionally remove the training examples where no knowledge piece is used to generate the response.

# E Additional Experiments

# E.1 Pre-Training Mask Rates

We test the performance of EDMem with different mask rates during pre-training, and list the results in Table 10. In pre-training, we adopt two masked language modeling objectives: random token masking (RTM) and salient span masking (SSM). With smaller mask rates, more contextual information is visible to the model. Therefore, when evaluating the model on the validation set during pre-training, smaller mask rates lead to lower language model perplexity and better entity linking performance. However, larger mask rates ultimately lead to better performance on the downstream task: with larger mask rates, the model receives more training signal and is thus trained more thoroughly. Specifically, in SSM, the model is encouraged to leverage the entity knowledge from the entity memory to predict the masked mention. Therefore, a larger SSM rate leads to more thorough learning of the entity memory, where the contextual information of masked mentions is integrated into the corresponding entity embedding.

# E.2 Pre-Training Steps

We fine-tune EDMem on TQA using pre-trained checkpoints at different numbers of training steps. As shown in Figure 5, longer pre-training leads to better performance on the downstream task. Although there is no sign of overfitting, as the learning rate gradually decays and the model gradually converges, there is little improvement in model performance after 500K steps.
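The two masking objectives of E.1 can be sketched as follows (a toy illustration with string tokens and helper names of our own; the actual model masks subword ids):

```python
import random

def random_token_mask(tokens, p_rtm=0.3, rng=random):
    """RTM: independently replace each token with [MASK] w.p. p_rtm."""
    return [tok if rng.random() >= p_rtm else "[MASK]" for tok in tokens]

def salient_span_mask(tokens, mention_spans, p_ssm=0.5, rng=random):
    """SSM: mask whole entity-mention spans (half-open [start, end))
    with probability p_ssm, so the model must recover the full mention,
    ideally by consulting the entity memory."""
    out = list(tokens)
    for start, end in mention_spans:
        if rng.random() < p_ssm:
            out[start:end] = ["[MASK]"] * (end - start)
    return out
```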
| $P_{rtm}$ | $P_{ssm}$ | Pre-Train PPL↓ | Pre-Train ACC↑ | TQA EM↑ | TQA Entity↑ |
|---|---|---|---|---|---|
| 0.1 | 0.3 | 1.14 | 73.33 | 41.01 | 46.86 |
| 0.2 | 0.4 | 1.31 | 72.94 | 41.52 | 47.03 |
| 0.3 | 0.5 | 1.51 | 71.37 | 42.24 | 48.27 |
Table 10: Performance of EDMem with different pre-training mask rates. PPL: perplexity of the masked tokens on the validation set (lower is better); ACC: entity linking accuracy of masked mention spans on the validation set; EM: overall exact match scores; Entity: exact match scores on entity answers.

![](images/32b34e8bf81cfb52c4f5d17f4d97119776a78cdd0a5b7c0c7cd24d80fe03673c.jpg)
Figure 5: TQA performance of EDMem with different pre-training steps.

# F Human Evaluation Details

Here we provide the actual questions that we asked Amazon MTurk annotators in human evaluation, along with their rubrics. For the fluency, relevance and correctness metrics, annotators are asked to give scores on a 1-3 scale. For informativeness and reasonability, ranking evaluation is applied. This is because in the ELI5 and WoW datasets the human-written answer is not the only possible one for the context, so we do not compare model generations to the ground truth. Instead, we let annotators evaluate the human-written answer along with the model-generated ones. Since it is hard to set clear rubrics for informativeness and reasonability when no ground truth is given, we adopt a ranking-based evaluation. The annotator is asked to rank all sequences (three model generations and the human-written answer) from $\# 1 - \# 4$, with lower rankings indicating better results.

- Fluency: How is the fluency of the machine-generated explanation? (Do not consider its correctness)

3 - Fluent English
2 - Readable with grammar errors or typos
1 - Not fluent at all

- Relevance: Does the machine-generated explanation contain the same concepts as the human-written reference?
(Synonyms are allowed, and do not consider its factual correctness)

3 - Contains the same concepts as the reference
2 - Misses some concepts in the reference, or contains redundant concepts
1 - Does not contain any concept in the reference

- Correctness: Does the machine-generated explanation express similar meanings to the human-written reference? (Paraphrases are allowed)

3 - Expresses similar meanings to the reference
2 - Expresses partial meanings of the reference
1 - Expresses totally different meanings from the reference

- Informativeness: Rank these answers based on the amount of information they contain. #1: most informative, #4: least informative. Ties are allowed (e.g., 1/1/3/4 or 1/2/2/2). You do not need to consider whether the information is relevant to the question.
- Reasonability: Rank these answers based on whether they are reasonable answers to the question. #1: most reasonable, #4: least reasonable. Ties are allowed (e.g., 1/1/3/4 or 1/2/2/2).

# G Case Study

We present an example on TQA with a non-entity answer in Table 11. We use our auto-encoder variant EncMem to represent memory-based auto-encoder models. We also provide additional examples on generation datasets in Tables 12-14.
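In the case studies below, "Attended entities" are the highest-scoring entries of the entity memory under the dot-product scoring of Appendix C, and EDMem-dynamic constrains the span between $[E_s]$ and $[E_e]$ to one of those entity names. A toy sketch of the selection step (illustrative names and sizes; the real model scores 1M entity embeddings of dimension 256 and keeps the top 100):

```python
def top_k_entities(query, entity_memory, k=100):
    """Score every entity embedding by its dot product with the
    decoder's query vector and return the k best entity ids."""
    scores = [sum(q * e for q, e in zip(query, emb))
              for emb in entity_memory]
    ranked = sorted(range(len(entity_memory)), key=lambda i: -scores[i])
    return ranked[:k]

# Toy memory: 4 "entities" with 2-dimensional embeddings.
memory = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.5, 0.5]]
candidates = top_k_entities([1.0, 0.0], memory, k=2)  # -> [0, 2]
```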
Question: At the equator, in miles per hour, what speed of the ground beneath your feet, as a result of the Earth's rotation?
Ground truth: [18,000 mph, eighteen thousand speed, 18,000 speed]
BART: 8,000
EncMem: Speed
EDMem-free: 18,000 mph
EDMem-static: 18,000 speed
Table 11: Case study of an example with a non-entity answer from the TQA dataset. Entity mentions in the question are underlined. Since the auto-encoder model EncMem can only perform entity linking to provide an answer, it selects the entity "Speed" from the entity memory. In contrast, EDMem is able to generate non-entity answers thanks to its encoder-decoder framework, and successfully predicts the correct speed.
Question: Where is Niagara?
Ground truth: Niagara is in New York, United States.
BART: Niagara is in Ontario, Canada.
EDMem-free: [Es] Niagara [Ee] is in [Es] Niagara County, Pennsylvania [Ee], [Es] United States [Ee]. +Attended entities: “Niagara County, New York”, “Regional Municipality of Niagara”, “Niagara Falls, New York”, “Niagara Falls”, “Wayne County, New York”
EDMem-dynamic: [Es] Niagara Falls [Ee] is in [Es] Niagara County, New York [Ee], [Es] United States [Ee]. +Attended entities: “Niagara County, New York”, “Niagara Falls”, “Regional Municipality of Niagara”, “New York (state)”, “Upstate New York”
+ +Table 12: Case study from the MSMARCO dataset. We list the top-5 entities that EDMem attends to when it generates the underlined entity. EDMem-free correctly predicts the county name and country name of Niagara. However, the state name "Pennsylvania" is not expected. Although the model generates $[E_s]$ and $[E_e]$ tokens around "Niagara County, Pennsylvania", it is not a valid entity name which leads to false information. In contrast, EDMem-dynamic generates the correct answer by constraining the model to generate "Niagara County, New York", one of the top-5 selected entity names. + +
Question: The history of Canada
Human-Written Answer: Canada was initially settled by France and later Britain invaded. Later some of the Southern British colonies rebelled against the King but the Northern colonies chose not to. A hundred years after that Southern rebellion the Northern colonies were granted their independence peacefully.
BART: Canada is a country of about 5 million people. It's a very large country, and it's been a part of the world for a long time.
EDMem-free: [Es] Canada [Ee] is a [Es] country [Ee] in the [Es] North America [Ee]. It was a [Es] colony [Ee] of [Es] English [Ee] and [Es] French [Ee]. +Attended entities: “Great Britain”, “United Kingdom”, “England”, “British Empire”, “British people”
EDMem-dynamic: [Es] Canada [Ee] is a country in the [Es] Americas [Ee]. It was a part of the [Es] British Empire [Ee], which was a [Es] nation [Ee] of [Es] Europe [Ee]. +Attended entities: “British Empire”, “Commonwealth of Nations”, “United Kingdom”, “Canada”, “United States”
+ +Table 13: Case study from the ELI5 dataset. We list the top-5 entities that EDMem attends to when it generates the underlined entity. BART generates a fluent sentence about Canada. However, it barely contains any useful information. Canada's population is much larger than 5 million, and the whole generation is not related to the history of Canada. The generation of EDMem-free is more informative than BART, but when it is supposed to generate country names, it generates "English" and "French", which hurts its performance. EDMem-dynamic avoids this problem since "English" is not in the top attended entities. It is able to generate a more informative answer by correctly selecting entities from the memory. + +
Dialogue: - I was a cheerleader in high school. Were you ever a cheer leader? +- Yeah! Remember back then, we do encourage our college basketball team to success. +- What is your favorite cheer leading move? I loved the pyramid.
Human-written Response: Yeah I loved the components of tumbling, dance, jumps, cheers, and stunting.
BART: I love the pyramid. It is a large, circular structure that is used to hold the flag.
EDMem-free: I like the [Es] hula hoop [Ee]. It is a [Es] dance [Ee] performed by [Es] hula hoops [Ee]. +Attended entities: “Haka”, “Human pyramid”, “Jumping”, “Push-up”, “Tumbling (gymnastics)”
EDMem-dynamic: I love the [Es] human pyramid [Ee]. [Es] Cheerleading [Ee] is a form of [Es] performance art [Ee] that combines [Es] gymnastics [Ee] and [Es] dance [Ee]. +Attended entities: “Human pyramid”, “Jumping”, “Haka”, “Cheering”, “Cheerleading”
Table 14: Case study from the WoW dataset. We list the top-5 entities that EDMem attends to when it generates the underlined entity. The second generated sentence of BART does not correlate with the context, and does not contain any correct information either. The generation of EDMem-free talks about the hula hoop, but it is not a typical cheerleading move. EDMem-dynamic generates a reasonable response by first selecting a correct cheerleading move "human pyramid", and then continues to generate an informative sentence by selecting relevant entities from the memory.
# A Unified Neural Network Model for Readability Assessment with Feature Projection and Length-Balanced Loss

Wenbiao Li $^{1,2}$ , Ziyang Wang $^{1,2}$ , Yunfang Wu $^{1,3*}$

$^{1}$ MOE Key Laboratory of Computational Linguistics, Peking University

$^{2}$ School of Software and Microelectronics, Peking University, Beijing, China

$^{3}$ School of Computer Science, Peking University, Beijing, China

{liwb, wzy232303}@stu.pku.edu.cn, wuyf@pku.edu.cn

# Abstract

For readability assessment, traditional methods mainly employ machine learning classifiers with hundreds of linguistic features. Although deep learning models have become the prominent approach for almost all NLP tasks, they are less explored for readability assessment. In this paper, we propose a BERT-based model with feature projection and length-balanced loss (BERT-FP-LBL) for readability assessment. Specifically, we present a new difficulty-knowledge-guided semi-supervised method to extract topic features to complement the traditional linguistic features. From the linguistic features, we employ projection filtering to extract orthogonal features to supplement BERT representations. Furthermore, we design a new length-balanced loss to handle the greatly varying length distribution of the data. Our model achieves state-of-the-art performance on two English benchmark datasets and one dataset of Chinese textbooks, and achieves near-perfect accuracy of $99\%$ on one English dataset. Moreover, our proposed model obtains results comparable with human experts in a consistency test.

# 1 Introduction

Readability assessment is the task of automatically determining the difficulty level of a given text, aiming to recommend suitable reading materials to readers.
There are wide applications of readability assessment, such as automating readers' advisory (Pera and Ng, 2014), clinical informed consent forms (Perni et al., 2019) and internet-based patient education materials (Sare et al., 2020).

Compared with other natural language processing (NLP) tasks, readability assessment is less explored. In the early days, researchers exploited linguistic features to develop various readability formulas, such as Flesch (Flesch, 1948), Dale-Chall (Dale and Chall, 1948) and SMOG (Mc Laughlin, 1969). Later, the mainstream research (Deutsch et al., 2020; Hansen et al., 2021; Lee et al., 2021) employed machine learning models to classify a text by designing a large number of linguistic features. There are also works that treat it as a regression task (Sheehan et al., 2010) or a ranking task (Lee and Vajjala, 2022).

Recently, unlike in other NLP tasks, the introduction of deep neural networks for readability assessment has not achieved overwhelming advantages over traditional machine learning methods. Employing neural network models for readability assessment faces several challenges:

(1) The scale of the datasets for readability assessment is small, which restricts the performance of deep neural network models.
(2) Deep neural network models are mainly based on characters or words, and the extracted features are often at a shallow level. As a result, words with similar functions or meanings, such as "man" and "gentleman", are mapped into close vectors although their reading difficulties are different (Jiang et al., 2018).
(3) The linguistic features designed by researchers and the continuous features extracted by neural network models come from two different semantic spaces. If the two kinds of features are simply concatenated, this brings redundant information or even harms model performance.
(4) Unlike other NLP data, whose length follows a normal distribution, a notable problem with readability assessment data is that text length varies greatly. Texts with low difficulty are usually shorter, while texts with high difficulty are usually longer. For example, as shown in Table 1, in ChineseLR the average length of Level 1 is only 266 characters, while the average length of Level 5 is 3,299 characters. As a result, when training deep neural networks, shorter texts tend to converge much faster than longer ones, which harms the overall performance.

In order to solve the above problems, we propose a BERT-based model with Feature Projection and Length-Balanced Loss (BERT-FP-LBL). With the pre-trained BERT as the backbone, we employ feature projection to integrate linguistic features into the neural model, and design a new length-balanced loss function to guide the training. Concretely:

- We leverage BERT and a mixed-pooling mechanism to obtain text representations, which take advantage of the powerful representative ability of the pre-trained model and thus alleviate the data-sparsity problem.
- Beyond traditional features, we extract a set of topic features enriched with difficulty knowledge, which are high-level global features. Specifically, based on a graded lexicon, we exploit a clustering algorithm to group related words belonging to the same difficulty level, which then serve as anchor words to guide the training of a semi-supervised topic model.
- Rather than simple concatenation, we project linguistic features onto the neural network features to obtain orthogonal features, which supplement the neural network representations.
- We introduce a new length-balanced loss function to revise the standard cross-entropy loss, which balances the varying length distribution of data for readability assessment.
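The exact length-balanced loss is defined later in the paper; as a rough illustration of the idea only (our assumption, not the authors' formula), one can down-weight the cross-entropy contribution of length buckets that dominate the corpus:

```python
from collections import Counter

def length_balanced_weights(lengths, bucket_width=500):
    """Per-example loss weights inversely proportional to the frequency
    of each example's length bucket, normalized to mean 1.0
    (illustrative only -- the paper's actual loss may differ)."""
    buckets = [n // bucket_width for n in lengths]
    counts = Counter(buckets)
    raw = [1.0 / counts[b] for b in buckets]
    scale = len(raw) / sum(raw)
    return [w * scale for w in raw]
```

With lengths [100, 200, 300, 3200], the three short texts share one bucket and each receive weight 2/3, while the lone long text receives weight 2, so rare lengths are not drowned out during training.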
We conduct experiments on three English benchmark datasets, namely WeeBit (Vajjala and Meurers, 2012), OneStopEnglish (Vajjala and Lucic, 2018) and Cambridge (Xia et al., 2019), and one Chinese dataset collected from school textbooks. Experimental results show that our proposed model outperforms the baseline model by a wide margin, and achieves new state-of-the-art results on WeeBit and Cambridge.

We also conduct a consistency test measuring the correlation between the BERT-FP-LBL model's inference results and the judgments of three human experts, and the results demonstrate that our model achieves results comparable with human experts.

# 2 Related Work

Traditional Methods. Early research efforts focused on various linguistic features as defined by linguists. Researchers used these features to create various readability formulas, including Flesch (Flesch, 1948), Dale-Chall (Dale and Chall, 1948) and SMOG (Mc Laughlin, 1969). Although readability formulas have the advantages of simplicity and objectivity, they also have some problems, such as the small number of variables involved in their development and insufficient consideration of variables at the discourse level.

Machine Learning Classification Methods. Schwarm and Ostendorf (2005) develop a method of reading level assessment that uses support vector machines (SVMs) to combine features from statistical language models (LMs), parse trees, and other traditional features used in reading level assessment. Subsequently, Petersen and Ostendorf (2009) present expanded results for the SVM detectors. Qiu et al. (2017) design 100 factors to systematically evaluate the impact of four levels of linguistic features (shallow, POS, syntactic, discourse) on predicting text difficulty for L1 Chinese learners, and further select 22 significant features with regression.
Lu et al. (2019) design experiments to analyze the influence of 88 linguistic features on sentence complexity, and their results suggest that linguistic features can significantly improve predictive performance, with a highest distance-1 adjacent accuracy of $70.78\%$. Deutsch et al. (2020) and Lee et al. (2021) evaluate the joint application of handcrafted linguistic features and deep neural network models, where the handcrafted linguistic features are fused with the features of neural networks and fed into a machine learning model for classification.

Neural Network Models. Jiang et al. (2018) provide knowledge-enriched word embeddings (KEWE) for readability assessment, which encode knowledge about reading difficulty into the representation of words. Azpiazu and Pera (2019) present a multi-attentive recurrent neural network architecture for automatic multilingual readability assessment. This architecture takes raw words as its main input, but internally captures text structure and informs its word attention process using other syntax- and morphology-related datapoints known to be of great importance to readability. Meng et al. (2020) propose a new and comprehensive framework which uses a hierarchical self-attention model to analyze document readability. Qiu et al. (2021) form a correlation graph among features, which represents pairwise correlations between features as triplets, with linguistic features as nodes and their correlations as edges.

# 3 Methodology

The overall structure of our model is illustrated in Figure 1. We integrate difficulty knowledge to extract topic features using Anchored Correlation Explanation (CorEx) (Gallagher et al., 2017), and fuse linguistic features with neural network representations through projection filtering. Further, we propose a new length-balanced loss function to deal with the unbalanced length distribution of readability assessment data.
# 3.1 Traditional Features

Many previous studies have shown that shallow and linguistic features are helpful for readability assessment. For Chinese traditional features, we develop a Chinese toolkit, zhfeat, to extract character, word, sentence and paragraph features. Please refer to Appendix A for detailed descriptions. For English traditional features, we extract discourse, syntactic, lexical and shallow features by directly applying the lingfeat (Lee et al., 2021) toolkit. We denote the traditional features as $f_{\alpha}$.

# 3.2 Topic Features with Difficulty Knowledge

Background. Besides the above lexical and syntactic features, topic features provide high-level semantic information for assessing difficulty level. Lee et al. (2021) also leverage topic features, but they train their topic model in a purely unsupervised way, without considering difficulty knowledge. Inspired by Anchored Correlation Explanation (CorEx) (Gallagher et al., 2017), which allows integrating domain knowledge through anchor words, we introduce word difficulty knowledge to guide the training of the topic model, thus obtaining difficulty-aware topic features.

First, we introduce the concept of the information bottleneck (Tishby et al., 2000), which aims to achieve a trade-off between compressing feature $X$ into representation $Y$ and preserving as much information as possible about the label $Z$. Formally, the information bottleneck is expressed as:

$$
\max_{p(y \mid x)} \xi I(Z:Y) - I(X:Y) \tag{1}
$$

$$
I(X_{1}:X_{2}) = H(X_{1}) + H(X_{2}) - H(X_{1}, X_{2}) \tag{2}
$$

where $I(X_{1}:X_{2})$ is the mutual information of random variables $X_{1}$ and $X_{2}$, $H(X)$ represents the entropy of the random variable $X$, and $\xi$ represents the Lagrange multiplier.
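To make Equation 2 concrete, here is a small stdlib-only sketch that computes entropy and mutual information from a toy discrete joint distribution; the numbers are illustrative, not from the paper:

```python
from math import log2

def entropy(dist):
    """Shannon entropy H(X), in bits, of a probability distribution."""
    return -sum(p * log2(p) for p in dist if p > 0)

def mutual_information(joint):
    """Eq. 2: I(X1:X2) = H(X1) + H(X2) - H(X1,X2), from a joint table."""
    px = [sum(row) for row in joint]         # marginal of X1
    py = [sum(col) for col in zip(*joint)]   # marginal of X2
    pxy = [p for row in joint for p in row]  # flattened joint
    return entropy(px) + entropy(py) - entropy(pxy)

# Perfectly correlated binary variables: one full bit of shared information
joint = [[0.5, 0.0],
         [0.0, 0.5]]
print(mutual_information(joint))  # 1.0
```

For independent variables (e.g. a uniform 2x2 joint table) the same function returns 0, matching the intuition that $I(X_1:X_2)$ measures shared information.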
In CorEx, if we want to learn representations that are more relevant to specific keywords, we can anchor a word type $X_{i}$ to topic $Y_{j}$, and control the strength of anchoring through the constraint $\xi \geq 1$. The optimization objective is:

$$
\max_{\xi_{i,j},\, p(y_{i} \mid x)} \sum_{j=1}^{u} \left(\sum_{i=1}^{v} \xi_{i,j} I(X_{i}:Y_{j}) - I(X:Y_{j})\right) \tag{3}
$$

where $u$ represents the number of topics, $v$ is the number of words corresponding to the topic, and $\xi_{i,j}$ represents the anchoring strength of word $i$ to topic $j$.

Extracting Difficulty-aware Topic Features. We utilize a lexicon containing words of varying difficulty levels to extract anchor words. Let $\Omega = \{L_1, L_2, \dots, L_k\}$ be a graded lexicon, where $L_i$ is the set of words with difficulty level $i$, and let $\mathcal{C}$ be the corpus for pre-training the topic model. First, we select the high-frequency words of each level that appear in the corpus $\mathcal{C}$:

$$
W_{i} = L_{i} \cap \mathcal{C} \tag{4}
$$

where $\cap$ represents the intersection operation.

For the words of each level, we apply the KMeans clustering algorithm to group them, and then remove isolated clusters (clusters containing only a single word):

$$
W_{i}^{a} = \operatorname{KMeans}(W_{i}) \tag{5}
$$

The clustering result for words with difficulty level $i$ is denoted as $W_{i}^{a} = \{\{w_{i11}^{a}, w_{i12}^{a}, \ldots\}, \{w_{i21}^{a}, w_{i22}^{a}, \ldots\}, \ldots\}$.
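The anchor-word selection of Equations 4-5 can be sketched as below. The one-dimensional "embeddings", the toy words, and the minimal KMeans are all illustrative stand-ins for the real word vectors and clustering setup used in the paper:

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Minimal 1-D Lloyd's KMeans over (word, value) pairs; returns clusters."""
    random.seed(seed)
    centers = [v for _, v in random.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, v in points:
            idx = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[idx].append((w, v))
        # Recompute centers; keep the old center if a cluster is empty
        centers = [sum(v for _, v in c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

def anchor_words(level_lexicon, corpus_vocab, embed, k=2):
    """Eq. 4: intersect a lexicon level with the corpus vocabulary;
    Eq. 5: cluster the survivors and drop isolated (singleton) clusters."""
    W_i = [(w, embed[w]) for w in level_lexicon if w in corpus_vocab]
    clusters = kmeans_1d(W_i, k)
    return [[w for w, _ in c] for c in clusters if len(c) > 1]

# Toy data: "rare" sits far from everything and ends up as a removed singleton
emb = {"cat": 0.1, "dog": 0.2, "bank": 0.9, "loan": 1.0, "rare": 5.0}
groups = anchor_words(["cat", "dog", "bank", "loan", "rare", "ghost"],
                      set(emb), emb, k=3)
print(groups)
```

A real pipeline would cluster high-dimensional embeddings per difficulty level, but the intersection / cluster / filter-singletons structure is the same.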
Thus, the final anchor words are:

$$
W^{a} = \left\{W_{1}^{a}, W_{2}^{a}, \dots, W_{k}^{a}\right\} \tag{6}
$$

These anchor words of different difficulty levels serve as domain knowledge to guide the training of the topic model:

$$
\mathbf{ATM} = \operatorname{CorEx}\left(\mathcal{C}, \text{anchors} = W^{a}\right) \tag{7}
$$

where ATM denotes the anchored topic model. We then apply the ATM to obtain a set of topic distribution features carrying difficulty information, denoted as $f_{\beta}$.

![](images/db8aaf4c4fa7e56c95641f35cc7d2a62d17739ed81efd8174b83eb0995d341eb.jpg)
Figure 1: The overall structure of our proposed model for readability assessment. CA represents the clustering algorithm. The input and output colors of the feature projection layer represent different types of features.

Combining traditional and topic features, we obtain the overall linguistic features:

$$
f_{\gamma} = f_{\alpha} \oplus f_{\beta} \tag{8}
$$

where $\oplus$ represents the concatenation operation.
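A minimal sketch of assembling the anchor set of Equation 6 and the feature concatenation of Equation 8. The words, feature values, and helper names are hypothetical; a real pipeline would hand the anchor groups to CorEx as in Equation 7:

```python
def final_anchors(per_level_clusters):
    """Eq. 6: collect the per-level anchor-word clusters into one set W^a,
    kept as a flat list of word groups (one group per cluster)."""
    groups = []
    for level in sorted(per_level_clusters):
        groups.extend(per_level_clusters[level])
    return groups

def concat_features(f_alpha, f_beta):
    """Eq. 8: f_gamma = f_alpha (+) f_beta, where (+) is concatenation."""
    return list(f_alpha) + list(f_beta)

# Toy clusters for two difficulty levels (hypothetical words)
W_a = final_anchors({1: [["cat", "dog"]], 2: [["bank", "loan", "rate"]]})
print(W_a)  # [['cat', 'dog'], ['bank', 'loan', 'rate']]

# Toy traditional features f_alpha and topic features f_beta
f_gamma = concat_features([0.3, 1.2], [0.9, 0.1, 0.5])
print(len(f_gamma))  # 5
```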
# 3.3 Feature Fusion with Projection Filtering

BERT Representation. We leverage the pre-trained BERT model (Devlin et al., 2018) to obtain sentence representations.

The length distribution of readability assessment data varies greatly, and texts with higher difficulty are very long, which might exceed the input limit of the model. Therefore, for an input text $S$, we segment it as $S = (s_1, s_2, \dots, s_m)$. For each segment, we exploit BERT to extract its semantic representation: $H_s = (h_{s_1}, h_{s_2}, \dots, h_{s_m})$.

Further, we adopt Mixed Pooling (Yu et al., 2014) to extract a representation of the entire text:

$$
f_{\eta} = \lambda \operatorname{MaxPooling}\left(H_{s}\right) + (1 - \lambda) \operatorname{MeanPooling}\left(H_{s}\right) \tag{9}
$$

where $\lambda$ is a parameter balancing the ratio between max pooling and mean pooling.

Projection Filtering. To obtain better performance, we combine BERT representations with linguistic features. Direct concatenation is problematic: because the two kinds of features come from different semantic spaces, it not only introduces repetitive information but may also create contradictions between features that harm performance. When performing feature fusion, our goal is instead to obtain additional orthogonal features that complement each other. Our approach is inspired by Qin et al. (2020), who use two identical encoders with different optimization objectives to extract common and differentiated features. Unlike that work, our handcrafted features and neural features are extracted in different ways, and our purpose is feature complementation. The pre-trained model mainly captures semantic-level features through contextual co-occurrence relationships in large-scale corpora, which is not sufficient for readability tasks; discriminating difficulty requires supplementary features (difficulty, syntax, etc.). We therefore treat the features extracted by BERT as primary features and the linguistic features as secondary ones, and project the secondary features onto the primary features to obtain additional orthogonal features.
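A toy, stdlib-only sketch of the mixed pooling in Equation 9 and of the orthogonal-projection idea just described; the vectors are made up, and the formal transformations follow in the text:

```python
def mixed_pooling(H, lam=0.5):
    """Eq. 9: elementwise lam*MaxPooling + (1-lam)*MeanPooling over segments."""
    return [lam * max(d) + (1 - lam) * sum(d) / len(d) for d in zip(*H)]

def orthogonal_component(f_gamma, f_eta):
    """Strip from the secondary feature f_gamma its projection onto the
    primary feature f_eta; the remainder is orthogonal to f_eta."""
    dot = sum(a * b for a, b in zip(f_gamma, f_eta))
    norm_sq = sum(b * b for b in f_eta)
    return [a - (dot / norm_sq) * b for a, b in zip(f_gamma, f_eta)]

# Toy segment representations: 3 segments, hidden dimension 2
H_s = [[1.0, 0.0], [3.0, 2.0], [2.0, 4.0]]
f_eta = mixed_pooling(H_s, lam=0.5)
print(f_eta)  # [2.5, 3.0]

f_gamma = [3.0, 4.0]  # toy linguistic feature vector
f_o = orthogonal_component(f_gamma, f_eta)
# f_o carries only the part of f_gamma not already present in f_eta:
print(abs(sum(a * b for a, b in zip(f_o, f_eta))) < 1e-9)  # True
```

The orthogonality check at the end is exactly the "complementary, non-redundant" property the projection filtering aims for.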
Concretely, given the linguistic features $f_{\gamma}$ and the BERT representation $f_{\eta}$, we perform dimensional transformations to project them into the same vector space:

$$
f_{\gamma} = \tanh\left(\tanh\left(f_{\gamma} \mathbf{W}_{1} + \mathbf{b}_{1}\right) \mathbf{W}_{3} + \mathbf{b}_{3}\right) \tag{10}
$$

$$
f_{\eta} = \tanh\left(\tanh\left(f_{\eta} \mathbf{W}_{2} + \mathbf{b}_{2}\right) \mathbf{W}_{3} + \mathbf{b}_{3}\right) \tag{11}
$$

where $\mathbf{W}_1, \mathbf{W}_2$ and $\mathbf{W}_3$ are trainable parameter matrices, and $\mathbf{b}_1, \mathbf{b}_2$ and $\mathbf{b}_3$ are biases.

Next, we project the secondary features onto the primary ones to obtain the additional orthogonal features $f_{o}$:

$$
f_{o} = f_{\gamma} - \frac{f_{\gamma} \cdot f_{\eta}}{\left|f_{\eta}\right|^{2}} f_{\eta} \tag{12}
$$

The orthogonal features are then concatenated with the BERT representation to constitute the final text representation:

$$
f_{\tau} = f_{o} \oplus f_{\eta} \tag{13}
$$

Finally, we compute the probability that a text belongs to the $i$-th category by:

$$
p_{i} = \operatorname{Softmax}\left(f_{\tau} \mathbf{W}_{4} + \mathbf{b}_{4}\right) \tag{14}
$$

where $\mathbf{W}_4$ is a trainable parameter matrix and $\mathbf{b}_4$ is a bias.

# 3.4 Length-Balanced Loss Function

Text length is an important aspect in determining the reading difficulty level. As shown in Table 1, a text with a high difficulty level generally contains more tokens than one with a low level. For example, on the Cambridge dataset, the average length of Level 1 is 141 tokens, while the average length of Level 5 is 751 tokens. When training deep learning methods, short texts tend to converge much faster than long texts, which influences the final performance.

To address this issue, we revise the loss to handle varying lengths.
Specifically, we measure the length distribution by combining different length attributes, including the average, median, minimum and maximum length:

$$
\theta_{i} = \sum_{j=1}^{4} \pi_{ij}, \quad i = 1, 2, \dots, N \tag{15}
$$

where $\theta_{i}$ represents the length value of text category $i$; $\pi_{i,1}, \pi_{i,2}, \pi_{i,3}$ and $\pi_{i,4}$ represent the average, median, minimum and maximum length of the texts in category $i$, respectively; and $N$ is the total number of categories.

We normalize the length values to obtain the length coefficient for each category:

$$
\kappa_{i} = \frac{\theta_{i}}{\sum_{i=1}^{N} \theta_{i}} \tag{16}
$$

Accordingly, the final loss function for a single sample is defined as:

$$
\mathcal{L} = -\sum_{i=1}^{N} \kappa_{i}^{\rho} y_{i} \log\left(p_{i}\right) \tag{17}
$$

where $y_{i}$ is the true label of the text and $\rho$ is the adjustment factor for the length distribution. When $\rho = 0$, the loss reduces to the traditional cross-entropy loss.

# 4 Experimental Setup

# 4.1 Datasets

To demonstrate the effectiveness of our proposed method, we conduct experiments on three English datasets and one Chinese dataset. We split the data into train, validation and test sets with a ratio of 8:1:1. The statistics of the datasets can be found in Table 1.

WeeBit (Vajjala and Meurers, 2012) is often considered the benchmark dataset for English readability assessment. It was originally created as an extension of the well-known Weekly Reader corpus. We downsample to 625 passages per class.

OneStopEnglish (Vajjala and Lucic, 2018) is an aligned corpus developed for readability assessment and simplification research. Each text is paraphrased into three versions.

Cambridge (Xia et al., 2019) is a dataset consisting of reading passages from the five main suite Cambridge English Exams (KET, PET, FCE, CAE, CPE). We downsample to 60 passages per class.
ChineseLR is a Chinese dataset that we collected from the primary and middle school textbooks of more than ten publishers. To suit our task, we remove poetry and traditional Chinese texts. Following the standards specified in the Chinese Curriculum Standards for Compulsory Education, we categorize all texts into five difficulty levels.

# 4.2 Baseline Models

SVM. We employ a support vector machine as the traditional machine learning classifier. The input to the model is the linguistic feature vector $f_{\gamma}$. We adopt MinMaxScaler (ranging from -1 to 1) for linguistic
| Level | WeeBit Passages | WeeBit Avg.Length | OneStopE Passages | OneStopE Avg.Length | Cambridge Passages | Cambridge Avg.Length | ChineseLR Passages | ChineseLR Avg.Length |
|---|---|---|---|---|---|---|---|---|
| 1 | 625 | 152 | 189 | 535 | 60 | 141 | 814 | 266 |
| 2 | 625 | 189 | 189 | 678 | 60 | 271 | 1063 | 679 |
| 3 | 625 | 295 | 189 | 825 | 60 | 617 | 1104 | 1140 |
| 4 | 625 | 242 | - | - | 60 | 763 | 762 | 2165 |
| 5 | 625 | 347 | - | - | 60 | 751 | 417 | 3299 |
| All | 3125 | 245 | 567 | 679 | 300 | 509 | 4160 | 1255 |
features and use the RBF kernel function. We use the libsvm framework for the experiments.

BERT. We use $f_{\eta}$ from Equation 9, followed by a linear-layer classifier, as our BERT baseline model.

# 4.3 Training and Evaluation Details

For the difficulty-level lexicon $\Omega$, on the English datasets we use the lexicon released by Maddela and Xu (2018), of which we only use the first 4 levels; on the Chinese dataset, we use the Compulsory Education Vocabulary (Su, 2019). For the word embeddings used by the clustering algorithm, we use GloVe (Pennington et al., 2014) for English and directional skip-gram embeddings (Song et al., 2018) for Chinese. We use the Wikipedia corpus for pre-training the semi-supervised topic models. Please refer to Appendix B for further details.

We implement our experiments in the PyTorch (Paszke et al., 2019) framework. For training, we use the AdamW optimizer with a weight decay of 0.02 and a warm-up ratio of 0.1. The mixed-pooling ratio $\lambda$ is set to 0.5. Other parameter settings are shown in Table 2.

For evaluation, we calculate accuracy, weighted F1 score, precision, recall and quadratic weighted kappa (QWK). We repeat each experiment three times and report the average score.

Table 1: Statistics of the datasets for readability assessment. Avg.Length is the average number of tokens per passage.
| Dataset | Batch | MaxLen | Epoch | lr | ρ |
|---|---|---|---|---|---|
| WeeBit | 8 | 512 | 10 | 3e-5 | 0.8 |
| OneStopE | 8 | 500×2 | 10 | 3e-5 | 0.4 |
| Cambridge | 8 | 500×2 | 10 | 3e-5 | 0.6 |
| ChineseLR | 2 | 500×8 | 10 | 3e-5 | 0.4 |
Table 2: Part of the hyperparameter settings, where $500 \times n$ means splitting the text into $n$ segments of 500 tokens each.

# 5 Results and Analysis

# 5.1 Overall Results

The experimental results of all models are summarized in Table 3. First of all, it should be noted that there are only a few studies on readability assessment, and there is no unified standard for data splits and experimental parameter configuration, which has led to large differences between the results of different research works.

Our BERT-FP-LBL model achieves consistent improvements over the baselines on all four datasets, which validates the effectiveness of our proposed method. In terms of the F1 metric, our method improves over the baseline BERT model by 1.66 on WeeBit and 3.70 on ChineseLR. Overall, our model achieves state-of-the-art performance on WeeBit and Cambridge. On OneStopEnglish, our model achieves results competitive with previous work (Lee et al., 2021), also reaching near-perfect classification accuracy of $99\%$.

Comparing the experimental results of the SVM and the base BERT model, we observe that on Cambridge and ChineseLR, the SVM outperforms BERT. We believe this benefits from our designed linguistic features.

# 5.2 Ablation Study

To illustrate the contribution of each module in our model, we conduct ablation experiments on WeeBit and ChineseLR; the results are reported in Table 4.

When AW is removed, CorEx changes from semi-supervised to unsupervised. The F1 scores on WeeBit and ChineseLR drop by 0.31 and 0.52, respectively, and when TFDK is removed, the corresponding F1 scores drop by 0.61 and 1.27. This indicates that our topic features incorporating difficulty knowledge indeed contribute to readability assessment.

Furthermore, when FP is removed, as described in Section 3.3, the simple splicing operation brings
| Dataset | Metrics | Qiu-2021 | Mar-2021 | Lee-2021 | SVM | BERT | BERT-FP-LBL |
|---|---|---|---|---|---|---|---|
| WeeBit | Accuracy | 87.32 | 85.73 | 90.50 | 79.37 | 91.11 | 92.70 |
| | F1 | - | 85.81 | 90.50 | 79.27 | 91.07 | 92.73 |
| | Precision | - | 86.58 | 90.50 | 79.26 | 91.42 | 92.89 |
| | Recall | - | 85.73 | 90.40 | 79.37 | 91.11 | 92.70 |
| | QWK | - | 95.27 | 96.80 | 93.22 | 97.36 | 97.78 |
| OneStopE | Accuracy | 86.61 | 78.72 | 99.00 | 89.47 | 97.66 | 99.42 |
| | F1 | - | 78.88 | 99.50 | 89.32 | 97.66 | 99.41 |
| | Precision | - | 79.77 | 99.50 | 89.41 | 97.83 | 99.44 |
| | Recall | - | 78.72 | 99.60 | 89.47 | 97.66 | 99.42 |
| | QWK | - | 82.45 | 99.60 | 92.31 | 92.98 | 98.25 |
| Cambridge | Accuracy | 78.52 | - | 76.30 | 83.33 | 82.22 | 87.78 |
| | F1 | - | - | 75.20 | 83.45 | 81.97 | 87.73 |
| | Precision | - | - | 79.20 | 90.91 | 82.96 | 89.46 |
| | Recall | - | - | 75.30 | 83.33 | 82.22 | 87.78 |
| | QWK | - | - | 91.90 | 91.97 | 94.65 | 96.87 |
| ChineseLR | Accuracy | - | - | - | 76.67 | 75.16 | 78.89 |
| | F1 | - | - | - | 76.53 | 75.05 | 78.75 |
| | Precision | - | - | - | 76.47 | 75.95 | 79.43 |
| | Recall | - | - | - | 76.67 | 75.16 | 78.89 |
| | QWK | - | - | - | 90.60 | 90.40 | 91.63 |
Table 3: Experimental results on both the English and Chinese datasets for readability assessment. We compare our method with three recent works: Qiu-2021 (Qiu et al., 2021), Mar-2021 (Martinc et al., 2021) and Lee-2021 (Lee et al., 2021).
| Model | WeeBit | ChineseLR |
|---|---|---|
| BERT-FP-LBL | 92.73 | 78.75 |
| -AW | 92.42 | 78.23 |
| -TFDK | 92.12 | 77.48 |
| -FP | 92.25 | 78.27 |
| -LBL | 91.76 | 76.94 |
Table 4: Ablation study in terms of the F1 metric. -AW removes the anchor words. -TFDK removes the difficulty-aware topic features. -FP splices the linguistic features and neural network features directly, without projection filtering. -LBL trains with the standard cross-entropy loss function $(\rho = 0)$.

some duplication or even negative information to the model. The F1 scores on WeeBit and ChineseLR both drop by 0.48.

Finally, when LBL is removed, the F1 scores on WeeBit and ChineseLR drop by 0.97 and 1.81, respectively. We believe that the difference in the length distribution of the dataset affects the convergence speed of the different categories, which in turn has an impact on the results. Besides, the drop in F1 is much more severe on ChineseLR than on WeeBit, which can be attributed to the more severe length imbalance of ChineseLR, as shown in Table 1.

# 5.3 Analysis of the Length-Balanced Loss

To explore the effect of the length-balanced loss, we conduct experiments with different values of $\rho$. The larger $\rho$ is, the bigger the difference between the losses of different categories, and this loss difference leads to different convergence rates. When $\rho$ is 0, the loss function is the standard cross-entropy loss, and there is no difference in the loss contributed by different categories. The specific results are shown in Figure 2.

For BERT, the optimal value of $\rho$ is relatively large, which means the model needs a relatively big difference in the loss to solve the problem of unbalanced text length. This indicates that there are indeed differences in convergence speed between different classes, and that this difference can be reduced by correcting the loss contributed by each class. After adding orthogonal features, the optimal value of $\rho$ is relatively small.
We think that, whether the text is short or long, the dimensionality of its corresponding orthogonal features is fixed and does not require the length-balanced loss for adjustment. So, when BERT features are combined with orthogonal features, the optimal value of $\rho$ is lower than for BERT alone.

In addition, the optimal value of $\rho$ on WeeBit is 0.8, while on ChineseLR it is 0.4. This is perhaps because the WeeBit dataset has a small span of length distribution (truncation at 512 tokens), so we need to relatively amplify the differences between categories, whereas the length distribution of the ChineseLR dataset has a large span $(500 \times 8)$, so we need to relatively narrow the differences between categories.

Of course, the optimal value of $\rho$ depends on the specific data distribution; it is a parameter that must be tuned by grid search. Generally speaking, when the length difference between categories is small, we set $\rho$ relatively large, and when the length difference between categories is large, we set $\rho$ relatively small.

![](images/153957a25c12e2bb717d9fa76da9a9b42a0056440705a8bcb78ff6c4f60b2849.jpg)
Figure 2: Influence of LBL on classification accuracy.

![](images/200132bae3adfb1de1d40e8f221a7042d7c3d5dffdbc8bf1963580a9ed0f1a7c.jpg)

# 5.4 Analysis of the Difficulty-aware Topic Features

To further explore the impact of topic features with domain knowledge, we visualize the traditional features $f_{\alpha}$, the difficulty-aware topic features $f_{\beta}$ and the combined features $f_{\gamma}$. Specifically, on WeeBit and ChineseLR, we randomly selected 50 samples from each of levels 1, 3 and 5 for visualization, as shown in Figures 3 and 4.

For texts of completely different difficulty, their traditional features lie close together in the latent space.
This shows that traditional features pay more attention to semantic information than to reading difficulty. Adding difficulty-aware topic features differentiates texts of different difficulty better, and combining the two kinds of features achieves a still better ability to distinguish reading difficulty.

# 5.5 Consistency Test with Human Experts

Judging the difficulty level of a text is a hard task even for humans, so we conduct experiments to investigate how consistent the model's inference results are with human experts. We collected 200 texts from extracurricular reading materials, and hired three elementary school teachers to perform double-blind labeling. Each text is annotated with a unique label 1/2/3, corresponding to the first/second/third level.

Our model (denoted M4) is treated as a single expert on equal footing with the three human experts (E1/E2/E3). We calculate the Spearman correlation coefficient of the annotation results between each pair, and report the results in Table 5.
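The pairwise agreement scores in Table 5 are Spearman rank correlations; a stdlib sketch on toy labels (not the real annotation data) looks like this:

```python
def rank(values):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Two raters who agree perfectly on five texts (labels 1-3, with ties)
print(spearman([1, 2, 3, 2, 1], [1, 2, 3, 2, 1]))  # 1.0
```

In practice one would also run a significance test for the 0.01-level markers reported in the table; the sketch only covers the coefficient itself.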
| Rater | E1 | E2 | E3 | M4 |
|---|---|---|---|---|
| E1 | 1.000 | - | - | - |
| E2 | 0.922** | 1.000 | - | - |
| E3 | 0.829** | 0.833** | 1.000 | - |
| M4 | 0.836** | 0.820** | 0.807** | 1.000 |
Table 5: The Spearman correlation coefficients between the four experts, where M4 is our model. ** indicates a significant correlation at the 0.01 level (two-tailed).

On the whole, there is a significant correlation at the 0.01 level between each human expert (E1, E2 or E3) and our model. On the one hand, there is still a certain gap between the model and human experts E1 and E2. On the other hand, the inference results of our model are comparable with those of human expert E3. In particular, when E1 is adopted as the reference standard, the consistency of our model's predictions is slightly higher than that of E3 (0.836 vs. 0.829); when E2 is the reference standard, the consistency of our model's predictions is slightly lower than that of E3.

Although there is no unified standard for the definition of "text difficulty", which relies heavily on the subjective experience of experts, our model achieves results competitive with human experts.

![](images/9742f342c08e9a499fb40f7e542f5f61e7c1cb1deb86e38a20a3d8eee4cec6e8.jpg)
Figure 3: Visualization of different kinds of features on WeeBit.

![](images/837976fbf13e7b1382f6b7a72586a944a6a2a0fd65a038f0e783047dc13d288d.jpg)

![](images/9bd644d608a8b66447bb59aa294d72ce3d79948475ff6f51c69f524a2a8ccf02.jpg)

![](images/84d859b6858cbe7d6af99056cbfbdc085b3a433ef4ec2249f1e0c932ed2bd55e.jpg)
Figure 4: Visualization of different kinds of features on ChineseLR.

![](images/c209ccd0e674bc2a4c67f027aac939b0acee92ce75ed748f612ced6a6dec630a.jpg)

![](images/250183001bc74ac143ea99d87c0d0b1cfeae7ad3086306a056dfc89fbaeaf9a3.jpg)

# 6 Conclusions

In this paper, we propose a unified neural network model, BERT-FP-LBL, for readability assessment. We extract difficulty-aware topic features through the Anchored Correlation Explanation method, and fuse linguistic features with BERT representations via projection filtering. We propose a length-balanced loss to cope with the imbalanced length distribution.
We conduct extensive experiments and detailed analyses on both English and Chinese datasets. The results show that our method achieves state-of-the-art results on three datasets and near-perfect accuracy of $99\%$ on one English dataset.

# Limitations

From the perspective of the experimental setup, there is no uniform standard for data splits and experimental parameter configurations, owing to the limited research on readability assessment. This leads to large differences between the results of different studies (Qiu et al., 2021; Martinc et al., 2021; Lee et al., 2021), whose reported experiments are therefore not directly comparable. Objectively speaking, our comparison is thus mainly against the baseline models, and a fully fair comparison with previous work is lacking.

From the perspective of the readability assessment task, different datasets have different difficulty scales and different length distributions. To ensure performance on each dataset as much as possible, the parameters of our length-balanced loss are calculated from the length distribution of the corresponding dataset, so they cannot be transferred across datasets directly, which is a major difficulty in this field. In cross-dataset and cross-language scenarios, there is a lack of a unified approach. Without new ways to deal with the difficulty scales of different datasets, or without large public datasets, developing a general readability assessment model will remain challenging.

# Acknowledgement

This work is supported by the National Natural Science Foundation of China (62076008), the Key Project of the Natural Science Foundation of China (61936012) and the National Hi-Tech R&D Program of China (No.2020AAA0106600).

# References

Ion Madrazo Azpiazu and Maria Soledad Pera. 2019. Multiattentive recurrent neural network architecture for multilingual readability assessment. Transactions of the Association for Computational Linguistics, 7:421-436.
Edgar Dale and Jeanne S Chall. 1948. A formula for predicting readability: Instructions. Educational Research Bulletin, pages 37-54.
Tovly Deutsch, Masoud Jasbi, and Stuart Shieber. 2020. Linguistic features for readability assessment. arXiv preprint arXiv:2006.00377.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Rudolph Flesch. 1948. A new readability yardstick. Journal of Applied Psychology, 32(3):221.
Ryan J Gallagher, Kyle Reing, David Kale, and Greg Ver Steeg. 2017. Anchored correlation explanation: Topic modeling with minimal domain knowledge. Transactions of the Association for Computational Linguistics, 5:529-542.
Hieronymus Hansen, Adam Widera, Johannes Ponge, and Bernd Hellingrath. 2021. Machine learning for readability assessment and text simplification in crisis communication: A systematic review. In Proceedings of the 54th Hawaii International Conference on System Sciences, page 2265.
Zhiwei Jiang, Qing Gu, Yafeng Yin, and Daoxu Chen. 2018. Enriching word embeddings with domain knowledge for readability assessment. In Proceedings of the 27th International Conference on Computational Linguistics, pages 366-378.
Bruce W Lee, Yoo Sung Jang, and Jason Hyung-Jong Lee. 2021. Pushing on text readability assessment: A transformer meets handcrafted linguistic features. arXiv preprint arXiv:2109.12258.
Justin Lee and Sowmya Vajjala. 2022. A neural pairwise ranking model for readability assessment. arXiv preprint arXiv:2203.07450.
Dawei Lu, Xinying Qiu, and Yi Cai. 2019. Sentence-level readability assessment for L2 Chinese learning. In Workshop on Chinese Lexical Semantics, pages 381-392. Springer.
Mounica Maddela and Wei Xu. 2018. A word-complexity lexicon and a neural readability ranking model for lexical simplification. arXiv preprint arXiv:1810.05754.
Matej Martinc, Senja Pollak, and Marko Robnik-Sikonja.
2021. Supervised and unsupervised neural approaches to text readability. Computational Linguistics, 47(1):141-179. +G Harry Mc Laughlin. 1969. Smog grading-a new readability formula. Journal of reading, 12(8):639-646. +Changping Meng, Muhao Chen, Jie Mao, and Jennifer Neville. 2020. Readnet: A hierarchical transformer framework for web article readability analysis. In European Conference on Information Retrieval, pages 33-49. Springer. +Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32. + +Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543. +Maria Soledad Pera and Yiu-Kai Ng. 2014. Automating readers' advisory to make book recommendations for k-12 readers. In Proceedings of the 8th ACM Conference on Recommender Systems, pages 9-16. +Subha Perni, Michael K Rooney, David P Horowitz, Daniel W Golden, Anne R McCall, Andrew J Einstein, and Reshma Jagsi. 2019. Assessment of use, specificity, and readability of written clinical informed consent forms for patients with cancer undergoing radiotherapy. JAMA oncology, 5(8):e190260-e190260. +Sarah E Petersen and Mari Ostendorf. 2009. A machine learning approach to reading level assessment. Computer speech & language, 23(1):89-106. +Qi Qin, Wenpeng Hu, and Bing Liu. 2020. Feature projection for improved text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8161-8171. +Xinying Qiu, Yuan Chen, Hanwu Chen, Jian-Yun Nie, Yuming Shen, and Dawei Lu. 2021. Learning syntactic dense embedding with correlation graph for automatic readability assessment. 
arXiv preprint arXiv:2107.04268.
Xinying Qiu, Kebin Deng, Likun Qiu, and Xin Wang. 2017. Exploring the impact of linguistic features for Chinese readability assessment. In National CCF Conference on Natural Language Processing and Chinese Computing, pages 771-783. Springer.
Antony Sare, Aesha Patel, Pankti Kothari, Abhishek Kumar, Nitin Patel, and Pratik A Shukla. 2020. Readability assessment of internet-based patient education materials related to treatment options for benign prostatic hyperplasia. Academic Radiology, 27(11):1549-1554.
Sarah E Schwarm and Mari Ostendorf. 2005. Reading level assessment using support vector machines and statistical language models. In Proceedings of the 43rd annual meeting of the Association for Computational Linguistics (ACL'05), pages 523-530.
Kathleen M Sheehan, Irene Kostin, Yoko Futagi, and Michael Flor. 2010. Generating automated text complexity classifications that are aligned with targeted text complexity standards. ETS Research Report Series, 2010(2):i-44.
Yan Song, Shuming Shi, Jing Li, and Haisong Zhang. 2018. Directional skip-gram: Explicitly distinguishing left and right context for word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 175-180.
Xinchun Su. 2019. Compulsory education common vocabulary (draft).
Naftali Tishby, Fernando C Pereira, and William Bialek. 2000. The information bottleneck method. arXiv preprint physics/0004057.
Sowmya Vajjala and Ivana Lucic. 2018. OneStopEnglish corpus: A new corpus for automatic readability assessment and text simplification. In Proceedings of the thirteenth workshop on innovative use of NLP for building educational applications, pages 297-304.
Sowmya Vajjala and Detmar Meurers. 2012. On improving the accuracy of readability classification using insights from second language acquisition.
In Proceedings of the seventh workshop on building educational applications using NLP, pages 163-173.
Menglin Xia, Ekaterina Kochmar, and Ted Briscoe. 2019. Text readability assessment for second language learners. arXiv preprint arXiv:1906.07580.
Dingjun Yu, Hanli Wang, Peiqiu Chen, and Zhihua Wei. 2014. Mixed pooling for convolutional neural networks. In International conference on rough sets and knowledge technology, pages 364-375. Springer.

# A Chinese Traditional Features
| Idx | Dim | Feature description |
| --- | --- | --- |
| 1 | 1 | Total number of characters |
| 2 | 1 | Number of character types |
| 3 | 1 | Type Token Ratio (TTR) |
| 4 | 1 | Average number of strokes |
| 5 | 1 | Weighted average number of strokes |
| 6 | 25 | Number of characters with different strokes |
| 7 | 25 | Proportion of characters with different strokes |
| 8 | 1 | Average character frequency |
| 9 | 1 | Weighted average character frequency |
| 10 | 1 | Number of single characters |
| 11 | 1 | Proportion of single characters |
| 12 | 1 | Number of common characters |
| 13 | 1 | Proportion of common characters |
| 14 | 1 | Number of unregistered characters |
| 15 | 1 | Proportion of unregistered characters |
| 16 | 1 | Number of first-level characters |
| 17 | 1 | Proportion of first-level characters |
| 18 | 1 | Number of second-level characters |
| 19 | 1 | Proportion of second-level characters |
| 20 | 1 | Number of third-level characters |
| 21 | 1 | Proportion of third-level characters |
| 22 | 1 | Number of fourth-level characters |
| 23 | 1 | Proportion of fourth-level characters |
| 24 | 1 | Average character level |

Table 6: Character features description.
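A few of the simpler character features in Table 6 (counts, types, TTR, common-character ratio) can be sketched directly in Python. This is only an illustration; `common_chars` is a hypothetical stand-in for the graded character lists the paper actually draws on:

```python
from collections import Counter

def character_features(text, common_chars=None):
    """Sketch of a few Table 6 features. `common_chars` stands in for a
    graded "common character" list, which the paper takes from external
    vocabularies (not shown here)."""
    chars = [c for c in text if not c.isspace()]
    counts = Counter(chars)
    total = len(chars)                      # Idx 1: total number of characters
    types = len(counts)                     # Idx 2: number of character types
    feats = {
        "total": total,
        "types": types,
        "ttr": types / total if total else 0.0,  # Idx 3: type-token ratio
    }
    if common_chars is not None:
        n_common = sum(n for c, n in counts.items() if c in common_chars)
        feats["common"] = n_common                                  # Idx 12
        feats["common_ratio"] = n_common / total if total else 0.0  # Idx 13
    return feats
```

Stroke counts, frequencies, and level-based features would follow the same counting pattern, looked up against the corresponding resource tables.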
| Idx | Dim | Feature description |
| --- | --- | --- |
| 1 | 1 | Total number of words |
| 2 | 1 | Number of word types |
| 3 | 1 | Type Token Ratio (TTR) |
| 4 | 1 | Average word length |
| 5 | 1 | Weighted average word length |
| 6 | 1 | Average word frequency |
| 7 | 1 | Weighted average word frequency |
| 8 | 1 | Number of single-character words |
| 9 | 1 | Proportion of single-character words |
| 10 | 1 | Number of two-character words |
| 11 | 1 | Proportion of two-character words |
| 12 | 1 | Number of three-character words |
| 13 | 1 | Proportion of three-character words |
| 14 | 1 | Number of four-character words |
| 15 | 1 | Proportion of four-character words |
| 16 | 1 | Number of multi-character words |
| 17 | 1 | Proportion of multi-character words |
| 18 | 1 | Number of idioms |
| 19 | 1 | Number of single words |
| 20 | 1 | Proportion of single words |
| 21 | 1 | Number of unregistered words |
| 22 | 1 | Proportion of unregistered words |
| 23 | 1 | Number of first-level words |
| 24 | 1 | Proportion of first-level words |
| 25 | 1 | Number of second-level words |
| 26 | 1 | Proportion of second-level words |
| 27 | 1 | Number of third-level words |
| 28 | 1 | Proportion of third-level words |
| 29 | 1 | Number of fourth-level words |
| 30 | 1 | Proportion of fourth-level words |
| 31 | 1 | Average word level |
| 32 | 57 | Number of words with different POS |
| 33 | 57 | Proportion of words with different POS |

Table 7: Word features description.
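Assuming the text has already been word-segmented (by any Chinese tokenizer; segmentation itself is not shown here), several of the length-based entries of Table 7 reduce to simple counting over the token list:

```python
def word_features(tokens):
    """Sketch of a few Table 7 features over a pre-segmented token list:
    totals, word types, average word length, and the share of
    two-character words."""
    total = len(tokens)                                   # Idx 1: total words
    types = len(set(tokens))                              # Idx 2: word types
    avg_len = sum(len(t) for t in tokens) / total if total else 0.0  # Idx 4
    two_char = sum(1 for t in tokens if len(t) == 2)      # Idx 10
    return {
        "total": total,
        "types": types,
        "avg_len": avg_len,
        "two_char_ratio": two_char / total if total else 0.0,  # Idx 11
    }
```

The frequency-, idiom-, and level-based features would additionally require the external word lists and frequency tables the paper uses.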
| Idx | Dim | Feature description |
| --- | --- | --- |
| 1 | 1 | Total number of sentences |
| 2 | 1 | Average characters in a sentence |
| 3 | 1 | Average words in a sentence |
| 4 | 1 | Maximum characters in a sentence |
| 5 | 1 | Maximum words in a sentence |
| 6 | 1 | Number of clauses |
| 7 | 1 | Average characters in a clause |
| 8 | 1 | Average words in a clause |
| 9 | 1 | Maximum characters in a clause |
| 10 | 1 | Maximum words in a clause |
| 11 | 30 | Sentence length distribution |
| 12 | 1 | Average syntax tree height |
| 13 | 1 | Maximum syntax tree height |
| 14 | 1 | Syntax tree height <= 5 ratio |
| 15 | 1 | Syntax tree height <= 10 ratio |
| 16 | 1 | Syntax tree height <= 15 ratio |
| 17 | 1 | Syntax tree height >= 16 ratio |
| 18 | 14 | Dependency distribution |

Table 8: Sentence features description.
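The count-based entries of Table 8 can likewise be sketched with a naive punctuation-based splitter; the paper's actual segmenter and its parser-derived syntax-tree features are not shown:

```python
import re

def sentence_features(text):
    """Sketch of a few Table 8 features: sentence/clause counts and
    per-sentence character statistics, using naive punctuation splits
    (a stand-in for a proper sentence segmenter)."""
    sentences = [s for s in re.split(r"[。！？.!?]", text) if s.strip()]
    clauses = [c for s in sentences
               for c in re.split(r"[，,；;]", s) if c.strip()]
    n = len(sentences)
    return {
        "sentences": n,                                                 # Idx 1
        "avg_chars": sum(len(s) for s in sentences) / n if n else 0.0,  # Idx 2
        "max_chars": max((len(s) for s in sentences), default=0),       # Idx 4
        "clauses": len(clauses),                                        # Idx 6
    }
```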
| Idx | Dim | Feature description |
| --- | --- | --- |
| 1 | 1 | Total number of paragraphs |
| 2 | 1 | Average characters in a paragraph |
| 3 | 1 | Average words in a paragraph |
| 4 | 1 | Maximum characters in a paragraph |
| 5 | 1 | Maximum words in a paragraph |

Table 9: Paragraph features description.

# B Semi-supervised Topic Model Related Parameters
| Pre-training | English | Chinese |
| --- | --- | --- |
| Length range | 300~1000 | 500~5000 |
| Items | 209018 | 180977 |
| Topics | 120 | 160 |
| Anchor topics | 60 | 80 |
| Anchor strength | 4 | 5 |
| First-level word anchor topics | 16 | 15 |
| Second-level word anchor topics | 21 | 36 |
| Third-level word anchor topics | 14 | 18 |
| Fourth-level word anchor topics | 9 | 11 |

Table 10: Details for pre-training the topic model.

# C SVM Model Related Hyperparameters
| Dataset | c | g |
| --- | --- | --- |
| WeeBit | 32 | 0.004 |
| OneStopE | 8 | 0.002 |
| Cambridge | 16 | 0.004 |
| ChineseLR | 64 | 0.032 |
+ +Table 11: SVM best parameters. + +The search range of parameter $c$ is [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768], and the search range of parameter $g$ is [0.002, 0.004, 0.008, 0.016, 0.032, 0.064, 0.128, 0.256, 0.512, 1.024, 2.048, 4.096]. \ No newline at end of file diff --git a/aunifiedneuralnetworkmodelforreadabilityassessmentwithfeatureprojectionandlengthbalancedloss/images.zip b/aunifiedneuralnetworkmodelforreadabilityassessmentwithfeatureprojectionandlengthbalancedloss/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3183a5b5525bfb04b4d89f9e344212a0c43c6710 --- /dev/null +++ b/aunifiedneuralnetworkmodelforreadabilityassessmentwithfeatureprojectionandlengthbalancedloss/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:390a0e9c503a6ff812ba529b25d603e182864e3fb78c87b6afeb05ee4bcc7a77 +size 802934 diff --git a/aunifiedneuralnetworkmodelforreadabilityassessmentwithfeatureprojectionandlengthbalancedloss/layout.json b/aunifiedneuralnetworkmodelforreadabilityassessmentwithfeatureprojectionandlengthbalancedloss/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1941f60ad065309703a04e063325015d113df49b --- /dev/null +++ b/aunifiedneuralnetworkmodelforreadabilityassessmentwithfeatureprojectionandlengthbalancedloss/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9023a6844fabc4b4fd9820ab4dc845f9cc571798bde5db9e5942335064b9966 +size 415611 diff --git a/aunifiedpositiveunlabeledlearningframeworkfordocumentlevelrelationextractionwithdifferentlevelsoflabeling/1ecd30bc-4328-45f3-b32f-70c32e4eb08a_content_list.json b/aunifiedpositiveunlabeledlearningframeworkfordocumentlevelrelationextractionwithdifferentlevelsoflabeling/1ecd30bc-4328-45f3-b32f-70c32e4eb08a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..cf60cd2f6a9d562ea686c452f3be0a2c7da8099b --- /dev/null +++ 
b/aunifiedpositiveunlabeledlearningframeworkfordocumentlevelrelationextractionwithdifferentlevelsoflabeling/1ecd30bc-4328-45f3-b32f-70c32e4eb08a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d4ab0714fdd5224c8b1142d88f2e541654adaf078cb41246f8f04dda37d15d22 +size 101677 diff --git a/aunifiedpositiveunlabeledlearningframeworkfordocumentlevelrelationextractionwithdifferentlevelsoflabeling/1ecd30bc-4328-45f3-b32f-70c32e4eb08a_model.json b/aunifiedpositiveunlabeledlearningframeworkfordocumentlevelrelationextractionwithdifferentlevelsoflabeling/1ecd30bc-4328-45f3-b32f-70c32e4eb08a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6d147b2dfe6910ec1ebb6f05650455c14595e267 --- /dev/null +++ b/aunifiedpositiveunlabeledlearningframeworkfordocumentlevelrelationextractionwithdifferentlevelsoflabeling/1ecd30bc-4328-45f3-b32f-70c32e4eb08a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d5aad8871b1eaf240c29e3cfd645e0b9a5134f645c1519389e63a5f08b07f9a +size 120597 diff --git a/aunifiedpositiveunlabeledlearningframeworkfordocumentlevelrelationextractionwithdifferentlevelsoflabeling/1ecd30bc-4328-45f3-b32f-70c32e4eb08a_origin.pdf b/aunifiedpositiveunlabeledlearningframeworkfordocumentlevelrelationextractionwithdifferentlevelsoflabeling/1ecd30bc-4328-45f3-b32f-70c32e4eb08a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8b9df76b514ab8ea905d3e68ca7525a68d249434 --- /dev/null +++ b/aunifiedpositiveunlabeledlearningframeworkfordocumentlevelrelationextractionwithdifferentlevelsoflabeling/1ecd30bc-4328-45f3-b32f-70c32e4eb08a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9d4cdff7fb1570a9b2a9cd34ae1200e9cbd1fd8f705ab7ae0f41d6f94d0c4ed +size 697465 diff --git a/aunifiedpositiveunlabeledlearningframeworkfordocumentlevelrelationextractionwithdifferentlevelsoflabeling/full.md 
b/aunifiedpositiveunlabeledlearningframeworkfordocumentlevelrelationextractionwithdifferentlevelsoflabeling/full.md new file mode 100644 index 0000000000000000000000000000000000000000..008e116b0386c7e5b33041690629f508e43df4c7 --- /dev/null +++ b/aunifiedpositiveunlabeledlearningframeworkfordocumentlevelrelationextractionwithdifferentlevelsoflabeling/full.md @@ -0,0 +1,445 @@ +# A Unified Positive-Unlabeled Learning Framework for Document-Level Relation Extraction with Different Levels of Labeling + +Ye Wang $^{1}$ , Xinxin Liu $^{1}$ , Wenxin Hu $^{1*}$ , Tao Zhang $^{2}$ + +1East China Normal University, Shanghai, China + +$^{2}$ Tsinghua University, Beijing, China + +{yewang,xxliu}@stu.ecnu.edu.cn,wxhu@cc.ecnu.edu.cn + +tao-zhan20@mails.tsinghua.edu.cn + +# Abstract + +Document-level relation extraction (RE) aims to identify relations between entities across multiple sentences. Most previous methods focused on document-level RE under full supervision. However, in real-world scenario, it is expensive and difficult to completely label all relations in a document because the number of entity pairs in document-level RE grows quadratically with the number of entities. To solve the common incomplete labeling problem, we propose a unified positive-unlabeled learning framework - shift and squared ranking loss positive-unlabeled (SSR-PU) learning. We use positive-unlabeled (PU) learning on document-level RE for the first time. Considering that labeled data of a dataset may lead to prior shift of unlabeled data, we introduce a PU learning under prior shift of training data. Also, using none-class score as an adaptive threshold, we propose squared ranking loss and prove its Bayesian consistency with multi-label ranking metrics. Extensive experiments demonstrate that our method achieves an improvement of about 14 F1 points relative to the previous baseline with incomplete labeling. 
In addition, it outperforms previous state-of-the-art results under both the fully supervised and the extremely unlabeled settings.

# 1 Introduction

Relation extraction (RE) aims to identify the relations between two entities in a given text. It has rich applications in knowledge graph construction, question answering, and biomedical text understanding. Most previous work extracts relations between entities within a single sentence (Miwa and Bansal, 2016; Zhang et al., 2018). Recently, document-level RE, which aims to identify the relations among entity pairs expressed across multiple sentences, has received increasing research attention (Yao et al., 2019; Zhou et al., 2021; Xu et al., 2022).

# Alecu Russo

1. Alecu Russo[0] (born in March 17, 1819[1], near Chișinău[2], died on February 5, 1859[3], in Iaşi[4]), was a Moldavian[5] Romanian[6] writer, literary critic and publicist.
2. Russo[0] is credited with having discovered one of the most elaborate forms of the Romanian[6] national folk ballad Miorita[7].
3. He was also a contributor to the Iasi periodical Zimbrul[8], in which he published one of his best-known works, Studie Moldovan[9] ("Moldovan Studies[9]"), in 1851[10] - 1852[11].
4. He also wrote Iaşi Şi locuitorii lui[12] in 1840[13] ("Iaşi and its inhabitants in 1840[13]") - a glimpse into Moldavian[5] society during the Organic Statute[14] administration, and two[15] travel accounts (better described as folklore studies), Piatra Teului[16] and Stanca Corbului[17].
5. Russo[0] is also notable for his Amintiri[18] ("Recollections[18]"), a memoir.

![](images/f230bd4261ab6a6f46b76e13bccc5308df627ab0c23e394c02365495afe8ac22.jpg)
Figure 1: A case from DocRED. Entities are highlighted in different colors depending on their type. Black arrows indicate relations annotated in the original dataset; orange arrows indicate relations re-annotated by (Tan et al., 2022b).
Previous document-level RE methods mainly deal with fully supervised scenarios. However, in real-world settings, incomplete labeling is a common problem in document-level RE because the number of entity pairs grows quadratically with the number of entities. DocRED (Yao et al., 2019) is a popular dataset for document-level RE. Recent studies (Huang et al., 2022; Tan et al., 2022b) found that DocRED, which annotates data with a recommend-revise scheme, contains a large number of false negative samples, i.e., many positive samples remain unlabeled. As shown in Figure 1, the document Alecu Russo contains a large number of unlabeled positive relations. Consequently, models trained on this dataset tend to overfit and obtain low recall in real scenarios. Handling incomplete labeling in document-level RE has therefore become an urgent need.

To solve this problem, we propose a unified positive-unlabeled learning framework - shift and squared ranking loss positive-unlabeled (SSR-PU) learning - which can be adapted to different levels of labeling. We use positive-unlabeled (PU) learning for the first time on the document-level RE task. Since document-level RE is a multi-label classification task, we apply a binary PU learning method for each class (one-vs-all), converting it to multi-label PU learning. In addition, according to our observations, a considerable portion of the relations in DocRED, a dataset annotated by a recommend-revise scheme, have already been annotated. This causes the prior distribution of the unlabeled data to deviate from the overall prior distribution. To address this problem, we introduce an adaptive PU learning under prior shift of the training data, which, based on the estimated overall prior and the distribution of labeled positive samples, behaves like ordinary PN learning or ordinary PU learning as appropriate. Here, positive-negative (PN) learning means treating all unlabeled samples as negative samples.
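The balance described above can be made concrete with a small numeric sketch of the shifted prior derived in Section 3.2; the priors here are made-up values, not estimates from DocRED:

```python
def shifted_prior(pi, pi_labeled):
    """Prior of positives among the *unlabeled* examples of one class:
    pi_u = (pi - pi_labeled) / (1 - pi_labeled), where pi = p(y=+1) is the
    overall prior and pi_labeled = p(s=+1) the fraction already annotated."""
    assert 0.0 <= pi_labeled <= pi < 1.0
    return (pi - pi_labeled) / (1.0 - pi_labeled)

# Hypothetical class: 10% positives overall, 6% already annotated.
pi_u = shifted_prior(0.10, 0.06)           # well below the overall prior 0.10
no_labels = shifted_prior(0.10, 0.0)       # equals pi: ordinary PU learning
all_labeled = shifted_prior(0.10, 0.10)    # equals 0:  ordinary PN learning
```

The two boundary cases show the interpolation in words above: with no annotation the method falls back to ordinary PU learning, and with exhaustive annotation to ordinary PN learning.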
Also, to distinguish the none-class from the pre-defined classes, we propose a squared ranking loss for none-class ranking, such that positive pre-defined labels are ranked above the none-class label and negative pre-defined labels are ranked below it. This yields an ideal multi-label surrogate loss, and we theoretically prove its Bayesian consistency with the multi-label ranking metric proposed by (Zhou and Lee, 2022). This loss function also adapts well to PU learning.

We conduct extensive experiments on two multi-label document-level RE datasets with incomplete labeling: DocRED (Yao et al., 2019) and ChemDisGene (Zhang et al., 2022), a newly proposed multi-label biomedical document-level RE dataset. Experimental results show that our method SSR-PU outperforms the previous baseline, which did not account for incomplete labeling, by about 14 F1 points. In addition, we perform fully supervised experiments, as well as experiments on a newly constructed, extremely unlabeled dataset in which the number of labeled instances of each relation type per document is limited to 1. Experiments under these two complementary settings demonstrate the effectiveness of our method with different levels of labeling. The contributions of this paper are summarized as follows:

- We propose a unified positive-unlabeled learning framework, SSR-PU, that adapts to document-level RE with different levels of incomplete labeling.
- We apply PU learning for the first time to the document-level RE task and introduce PU learning under prior shift of the training data, which strikes a balance between ordinary PN learning and ordinary PU learning based on the estimated prior and the labeling distribution.
- We propose the squared ranking loss, which effectively improves performance relative to other loss functions, and prove its Bayesian consistency with multi-label ranking metrics.
+- Our method achieves state-of-the-art results in a variety of settings and provides a robust baseline for document-level RE with incomplete labels. + +# 2 Related Work + +Document-level relation extraction. Previous generally effective methods for document-level RE are mainly graph-based models and transformer-based models. Graph-based models (Nan et al., 2020; Li et al., 2020; Zeng et al., 2020, 2021; Xu et al., 2021b) gather entity information for relational inference with graph neural networks, and transformer-based methods (Zhou et al., 2021; Xu et al., 2021a; Zhang et al., 2021; Tan et al., 2022a) implicitly capture long-range dependencies. Recently, (Huang et al., 2022; Tan et al., 2022b) found that a large number of positive relations remain unlabeled in document-level RE datasets, especially unpopular relations. However, the previous methods did not consider unlabeled data separately. They simply treated them all as negative samples, which led to a lower recall and a significant drop in performance in realistic scenarios. + +PU learning. Positive-unlabeled (PU) learning (Elkan and Noto, 2008; du Plessis et al., 2014; Plessis et al., 2015; Kiryo et al., 2017; Garg et al., + +2021) aims to learn a classifier from positive and unlabeled data. PU learning is a kind of semi-supervised learning but there is a fundamental difference between them: while semi-supervised learning requires labeled negative data, PU learning requires only labeled positive data. Many current PU learning methods rely on an overall prior estimate, while some recent studies (Charoenphakdee and Sugiyama, 2019; Nakajima and Sugiyama, 2021) have noticed a prior shift between the training set and the test set. 
On the other hand, PU learning has been used in many NLP applications, e.g., text classification (Li and Liu, 2003), sentence embedding (Cao et al., 2021), named entity recognition (Peng et al., 2019; Zhou et al., 2022), knowledge graph completion (Tang et al., 2022) and sentence-level RE (He et al., 2020). However, this method is rarely applied to the document-level RE task. + +Multi-label classification. Multi-label classification is a widely investigated problem, and here we focus on the loss function. Binary cross entropy (BCE) is the most popular multi-label loss, reducing the multi-label problem to a number of independent binary (one-vs-all) classification tasks. Recently, (Hui and Belkin, 2020) have found that squared loss can also achieve better results in classification tasks. Another common multi-label loss function is pairwise ranking loss, which transforms multi-label learning into a ranking problem via pairwise (one-vs-one) comparison (Furnkranz et al., 2008; Li et al., 2017). For multi-label PU learning, (Kanehira and Harada, 2016) treated it as a multi-label PU ranking problem, and (Aota et al., 2021) applied PU learning to multi-label common vulnerabilities and exposure classification by using one-vs-all strategy. For document-level RE task, (Zhou and Lee, 2022) proposed a none-class ranking multi-label metric. This multi-label metric has not yet been applied to PU learning. + +# 3 Methodology + +In this section, we introduce the details of our method shift and squared ranking loss positive unlabeled (SSR-PU) learning for document-level RE with incomplete labeling. Firstly, we introduce the definition of positive unlabeled learning for document-level RE. Next, we present the PU learning under prior shift of training data. Finally, squared ranking loss using the none-class score as an adaptive threshold is proposed. 
# 3.1 Positive-unlabeled learning for document-level RE

Document-level RE can be viewed as a multi-label classification task, where each entity pair is an instance and the associated relations are its labels. Previous supervised learning methods treated all unlabeled relations as negative samples, which may lead to low recall in the presence of a large number of false negatives. To address this problem, we adopt PU learning (du Plessis et al., 2014; Plessis et al., 2015) for each class.

Let $\mathcal{X}$ be an instance space and $\mathcal{Y} = \{-1, +1\}^{K}$ be a label space, where $K$ is the number of pre-defined classes. An instance $\pmb{x} \in \mathcal{X}$ is associated with a subset of labels, identified by a binary vector $\pmb{y} = (y_{1}, \dots, y_{K}) \in \mathcal{Y}$, where $y_{i} = +1$ if the $i$-th label is positive for $\pmb{x}$, and $y_{i} = -1$ otherwise. A score function is defined as $\pmb{f}(\pmb{x}) = (f_{1}(\pmb{x}), f_{2}(\pmb{x}), \dots, f_{K}(\pmb{x}))$. In the following we write $f_{i}$ for $f_{i}(\pmb{x})$ to omit the dependency on $\pmb{x}$.

For the $i$-th class, assume the data follow an unknown probability distribution with density $p(\pmb{x}, y_{i})$, with $p_{\mathrm{P}_{i}} = p(\pmb{x} \mid y_{i} = +1)$ the positive marginal, $p_{\mathrm{N}_{i}} = p(\pmb{x} \mid y_{i} = -1)$ the negative marginal, and $p_{i}(\pmb{x})$ the marginal. In positive-negative (PN) learning, the goal is to minimize the expected classification risk:

$$
R_{\mathrm{PN}}(f) = \sum_{i=1}^{K} \mathbb{E}_{\pmb{x}, y_{i} \sim p(\pmb{x}, y_{i})} [\ell(f_{i}, y_{i})]. \tag{1}
$$

Eq. 1 can equivalently be computed as the sum of the risks on positive and negative samples:

$$
R_{\mathrm{PN}}(f) = \sum_{i=1}^{K} \Big( \pi_{i} \mathbb{E}_{\mathrm{P}_{i}} [\ell(f_{i}, +1)] + (1 - \pi_{i}) \mathbb{E}_{\mathrm{N}_{i}} [\ell(f_{i}, -1)] \Big), \tag{2}
$$

where $\pi_{i} = p(y_{i} = +1)$ and $(1 - \pi_{i}) = p(y_{i} = -1)$ are the positive and negative priors of the $i$-th class, $\mathbb{E}_{\mathrm{P}_{i}}[\cdot] = \mathbb{E}_{\pmb{x} \sim p(\pmb{x} \mid y_{i} = +1)}[\cdot]$, $\mathbb{E}_{\mathrm{N}_{i}}[\cdot] = \mathbb{E}_{\pmb{x} \sim p(\pmb{x} \mid y_{i} = -1)}[\cdot]$, and $\ell$ denotes the loss function. Rewriting Eq. 2 as an empirical approximation over the data gives:

$$
\widehat{R}_{\mathrm{PN}}(f) = \sum_{i=1}^{K} \Bigg( \frac{\pi_{i}}{n_{\mathrm{P}_{i}}} \sum_{j=1}^{n_{\mathrm{P}_{i}}} \ell\big(f_{i}(\pmb{x}_{j}^{\mathrm{P}_{i}}), +1\big) + \frac{1 - \pi_{i}}{n_{\mathrm{N}_{i}}} \sum_{j=1}^{n_{\mathrm{N}_{i}}} \ell\big(f_{i}(\pmb{x}_{j}^{\mathrm{N}_{i}}), -1\big) \Bigg), \tag{3}
$$

where $\pmb{x}_{j}^{\mathrm{P}_{i}}$ and $\pmb{x}_{j}^{\mathrm{N}_{i}}$ denote the $j$-th positive and negative sample of class $i$, and $n_{\mathrm{P}_{i}}$ and $n_{\mathrm{N}_{i}}$ are the numbers of positive and negative samples of class $i$, respectively.

In positive-unlabeled (PU) learning, negative samples are absent, so $\mathbb{E}_{\mathrm{N}_{i}}[\cdot]$ cannot be estimated from the data. Following (du Plessis et al., 2014), PU learning assumes that the unlabeled data reflect the true overall distribution, that is, $p_{\mathrm{U}_{i}}(\pmb{x}) = p_{i}(\pmb{x})$. The expected classification risk can then be written as:

$$
R_{\mathrm{PU}}(f) = \sum_{i=1}^{K} \Big( \pi_{i} \mathbb{E}_{\mathrm{P}_{i}} [\ell(f_{i}, +1)] + \mathbb{E}_{\mathrm{U}_{i}} [\ell(f_{i}, -1)] - \pi_{i} \mathbb{E}_{\mathrm{P}_{i}} [\ell(f_{i}, -1)] \Big), \tag{4}
$$

where $\mathbb{E}_{\mathrm{U}_{i}}[\cdot] = \mathbb{E}_{\pmb{x} \sim p_{i}(\pmb{x})}[\cdot]$; the term $\mathbb{E}_{\mathrm{U}_{i}}[\ell(f_{i}, -1)] - \pi_{i} \mathbb{E}_{\mathrm{P}_{i}}[\ell(f_{i}, -1)]$ equals $(1 - \pi_{i}) \mathbb{E}_{\mathrm{N}_{i}}[\ell(f_{i}, -1)]$ because $p_{i}(\pmb{x}) = \pi_{i} p_{\mathrm{P}_{i}}(\pmb{x}) + (1 - \pi_{i}) p_{\mathrm{N}_{i}}(\pmb{x})$.

Rewriting Eq. 4 as an empirical approximation gives:

$$
\widehat{R}_{\mathrm{PU}}(f) = \sum_{i=1}^{K} \Bigg( \frac{\pi_{i}}{n_{\mathrm{P}_{i}}} \sum_{j=1}^{n_{\mathrm{P}_{i}}} \ell\big(f_{i}(\pmb{x}_{j}^{\mathrm{P}_{i}}), +1\big) + \bigg[ \frac{1}{n_{\mathrm{U}_{i}}} \sum_{j=1}^{n_{\mathrm{U}_{i}}} \ell\big(f_{i}(\pmb{x}_{j}^{\mathrm{U}_{i}}), -1\big) - \frac{\pi_{i}}{n_{\mathrm{P}_{i}}} \sum_{j=1}^{n_{\mathrm{P}_{i}}} \ell\big(f_{i}(\pmb{x}_{j}^{\mathrm{P}_{i}}), -1\big) \bigg] \Bigg), \tag{5}
$$

where $\pmb{x}_{j}^{\mathrm{U}_{i}}$ denotes the $j$-th sample unlabeled for class $i$ and $n_{\mathrm{U}_{i}}$ is the number of samples unlabeled for class $i$.

However, the bracketed term in Eq. 5 can be negative, which makes a highly flexible model prone to overfitting.
Thus, a non-negative risk estimator (Kiryo et al., 2017) is used to alleviate the overfitting problem:

$$
\widehat{R}_{\mathrm{PU}}(f) = \sum_{i=1}^{K} \Bigg( \frac{\pi_{i}}{n_{\mathrm{P}_{i}}} \sum_{j=1}^{n_{\mathrm{P}_{i}}} \ell\big(f_{i}(\pmb{x}_{j}^{\mathrm{P}_{i}}), +1\big) + \max\bigg( 0,\; \frac{1}{n_{\mathrm{U}_{i}}} \sum_{j=1}^{n_{\mathrm{U}_{i}}} \ell\big(f_{i}(\pmb{x}_{j}^{\mathrm{U}_{i}}), -1\big) - \frac{\pi_{i}}{n_{\mathrm{P}_{i}}} \sum_{j=1}^{n_{\mathrm{P}_{i}}} \ell\big(f_{i}(\pmb{x}_{j}^{\mathrm{P}_{i}}), -1\big) \bigg) \Bigg). \tag{6}
$$

For $\ell$, we use the convex squared loss:

$$
\ell(f_{i}, y_{i}) = \frac{1}{4} (y_{i} f_{i} - 1)^{2}, \tag{7}
$$

and we compare the performance of the squared loss and the log-sigmoid loss, another convex loss commonly used in classification, in Section 4.4.

In addition, to mitigate the heavy class imbalance, we multiply the positive risk estimates by the class weight $\gamma_{i} = \left(\frac{1 - \pi_{i}}{\pi_{i}}\right)^{0.5}$.

# 3.2 Class prior shift of training data

Ordinary PU learning assumes that the overall distribution is identical to the distribution of the unlabeled data. In contrast, in a document-level RE dataset constructed with a recommend-revise scheme, many relations, especially the common ones, have probably already been annotated. This leads to a prior shift in the unlabeled data of the training set, and when the assumption is broken, ordinary PU learning yields a biased result.

![](images/14823aaf0614c48810a940b7a6f5afc26f6981de71e82435a0855139a0a1ee9e.jpg)
Figure 2: Positive sample distribution shift after labeling, i.e., $p(A \mid \overline{C}) \neq p(A)$.
To address this problem, inspired by the method of Charoenphakdee and Sugiyama (2019) for handling a prior shift between the training and test sets, we introduce PU learning under a prior shift of the training data.

For each class, let the original prior be $\pi_i = p(y_i = +1)$. We set $\pi_{\text{labeled},i} = p(s_i = +1)$ and $1 - \pi_{\text{labeled},i} = p(s_i = -1)$, where $s_i = +1$ and $s_i = -1$ mean that the $i$-th class is labeled and unlabeled, respectively. As shown in Figure 2, the conditional probability of a positive sample within the unlabeled data differs from the overall probability of a positive sample. The conditional probability of a positive sample within the unlabeled data is:

$$
p(y_i = 1 \mid s_i = -1) = \frac{p(y_i = 1, s_i = -1)}{p(s_i = -1)}. \tag{8}
$$

Since $p(y_{i} = 1, s_{i} = -1) = \pi_{i} - \pi_{\text{labeled},i}$, the prior of positive samples in the remaining unlabeled data after labeling is $\pi_{u,i} = p(y_i = 1 \mid s_i = -1) = \frac{\pi_i - \pi_{\text{labeled},i}}{1 - \pi_{\text{labeled},i}}$.

For document-level RE, the goal is to minimize the following misclassification risk under the original distribution of the training data:

$$
\begin{aligned}
R_{\mathrm{ori}}(f) = \sum_{i=1}^{K} \Big( & \pi_{i}\, \mathbb{E}_{\mathrm{P}_{i}}\left[\ell(f_{i}, +1)\right] \\
& + (1 - \pi_{i})\, \mathbb{E}_{\mathrm{N}_{i}}\left[\ell(f_{i}, -1)\right] \Big).
\end{aligned} \tag{9}
$$

We can express $R_{\mathrm{ori}}(f)$ using the expectations over positive and unlabeled data by the following theorem.

Theorem 1. The misclassification risk $R_{\mathrm{ori}}(f)$ can be equivalently expressed as

$$
\begin{aligned}
R_{\mathrm{S\text{-}PU}}(f) = \sum_{i=1}^{K} \Big( & \pi_{i}\, \mathbb{E}_{\mathrm{P}_{i}}\left[\ell(f_{i}, +1)\right] \\
& + \frac{1 - \pi_{i}}{1 - \pi_{u,i}}\, \mathbb{E}_{\mathrm{U}_{i}}\left[\ell(f_{i}, -1)\right] \\
& - \frac{\pi_{u,i} - \pi_{u,i}\pi_{i}}{1 - \pi_{u,i}}\, \mathbb{E}_{\mathrm{P}_{i}}\left[\ell(f_{i}, -1)\right] \Big).
\end{aligned} \tag{10}
$$

Proof. Proof appears in Appendix A.1.

![](images/64d3502efb13de4f3f2737ae1cafdcc9ab482b4c45d6b11abe5136c1ef294408.jpg)

As a result, we can obtain the non-negative risk estimator (Kiryo et al., 2017) under a class prior shift of the training data as follows:

$$
\begin{aligned}
\widehat{R}_{\mathrm{S\text{-}PU}}(f) = \sum_{i=1}^{K} \Bigg( & \frac{\pi_{i}}{n_{\mathrm{P}_{i}}} \sum_{j=1}^{n_{\mathrm{P}_{i}}} \ell\big(f_{i}(\boldsymbol{x}_{j}^{\mathrm{P}_{i}}), +1\big) \\
& + \max\Bigg(0,\; \frac{1}{n_{\mathrm{U}_{i}}} \frac{1 - \pi_{i}}{1 - \pi_{u,i}} \sum_{j=1}^{n_{\mathrm{U}_{i}}} \ell\big(f_{i}(\boldsymbol{x}_{j}^{\mathrm{U}_{i}}), -1\big) \\
& \qquad\quad - \frac{1}{n_{\mathrm{P}_{i}}} \frac{\pi_{u,i} - \pi_{u,i}\pi_{i}}{1 - \pi_{u,i}} \sum_{j=1}^{n_{\mathrm{P}_{i}}} \ell\big(f_{i}(\boldsymbol{x}_{j}^{\mathrm{P}_{i}}), -1\big)\Bigg)\Bigg).
\end{aligned} \tag{11}
$$

We can observe that PN learning and PU learning are special cases of this estimator: when $\pi_{u,i} = 0$ it reduces to ordinary PN learning, and when $\pi_{u,i} = \pi_i$ it reduces to ordinary PU learning.

# 3.3 Squared ranking loss

To better measure the performance of document-level RE, Zhou and Lee (2022) proposed a new multi-label performance measure:

$$
\begin{aligned}
L_{\mathrm{NA}}(\boldsymbol{f}, \boldsymbol{y}) = \sum_{i=1}^{K} \Big( & \llbracket y_{i} > 0 \rrbracket \llbracket f_{i} < f_{0} \rrbracket \\
& + \llbracket y_{i} \leq 0 \rrbracket \llbracket f_{i} > f_{0} \rrbracket + \frac{1}{2} \llbracket f_{i} = f_{0} \rrbracket \Big),
\end{aligned} \tag{12}
$$
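Concretely, the shifted prior and the non-negative risk estimator of the previous subsection (Eqs. 8 and 11) can be sketched per class as follows. This is a minimal illustrative sketch with our own names (`shifted_prior`, `nn_spu_risk`), using the sigmoid loss as a stand-in for $\ell$; it is not the authors' implementation.

```python
import math

def shifted_prior(pi, pi_labeled):
    """pi_{u,i} = (pi_i - pi_{labeled,i}) / (1 - pi_{labeled,i}): the
    positive-class prior of the data that remains unlabeled (Section 3.2)."""
    return (pi - pi_labeled) / (1.0 - pi_labeled)

def sigmoid_loss(z, y):
    # A common surrogate loss in the PU literature, used here as a stand-in for ell.
    return 1.0 / (1.0 + math.exp(y * z))

def nn_spu_risk(scores_p, scores_u, pi, pi_labeled, loss=sigmoid_loss):
    """Non-negative S-PU risk for a single class (Eq. 11): the positive risk
    plus a negative-risk term clipped at zero (Kiryo et al., 2017)."""
    pi_u = shifted_prior(pi, pi_labeled)
    mean = lambda xs: sum(xs) / len(xs)
    pos_risk = pi * mean([loss(z, +1) for z in scores_p])
    neg_risk = ((1.0 - pi) / (1.0 - pi_u)) * mean([loss(z, -1) for z in scores_u]) \
        - ((pi_u - pi_u * pi) / (1.0 - pi_u)) * mean([loss(z, -1) for z in scores_p])
    return pos_risk + max(0.0, neg_risk)
```

Note the two special cases from the text: `shifted_prior(pi, pi)` returns 0 (ordinary PN learning, all unlabeled data treated as negative), and `shifted_prior(pi, 0.0)` returns `pi` (ordinary PU learning).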
| | DocRED (train) | DocRED (test) | ChemDisGene (train) | ChemDisGene (test) |
| --- | --- | --- | --- | --- |
| # docs | 3,053 | 500 | 76,942 | 523 |
| # rels | 96 | 96 | 14 | 14 |
| Avg # ents | 19.5 | 19.6 | 7.5 | 10.0 |
| Avg # rels | 12.5 | 34.9 | 2.1 | 7.2 |
Table 1: Statistics of Document-level RE Datasets

where positive pre-defined labels should be ranked higher than the none-class label and negative ones should be ranked below it. $\llbracket \cdot \rrbracket$ is an indicator function that takes the value 1 when the condition inside the brackets is met, and 0 otherwise.

However, it is difficult to optimize the above equation directly. Thus, we propose the squared ranking surrogate loss by rewriting Eq.7 as:

$$
\ell_{\mathrm{SR}}(f_{i}, y_{i}) = \frac{1}{4}\big(y_{i}(f_{i} - f_{0}) - \mathrm{margin}\big)^{2}, \tag{13}
$$

where margin is a hyper-parameter and $f_0$ is the none-class score: the label is predicted to exist when $f_i$ is greater than $f_0$, and not to exist otherwise.

Next we prove the Bayes consistency of $\ell_{\mathrm{SR}}$ with the multi-label ranking metric $L_{\mathrm{NA}}$ when margin $\neq 0$. Given an instance $\boldsymbol{x}$, let $\Delta_i = \mathrm{P}(y_i = 1 \mid \boldsymbol{x})$ be the marginal probability that the $i$-th label is positive. The Bayes optimal score function $\boldsymbol{f}_{\mathrm{NA}}^*$ that minimizes the multi-label risk $\mathbb{E}[L_{\mathrm{NA}}(\mathrm{P}, \boldsymbol{f}) \mid \boldsymbol{x}]$ is given by:

$$
\boldsymbol{f}_{\mathrm{NA}}^{*} \in \Big\{ \boldsymbol{f} : f_{i} > f_{0} \ \text{if} \ \Delta_{i} > \tfrac{1}{2}, \ \text{and} \ f_{i} < f_{0} \ \text{if} \ \Delta_{i} < \tfrac{1}{2} \Big\}. \tag{14}
$$

The next theorem guarantees that the classifier obtained by minimizing the surrogate loss $\ell_{\mathrm{SR}}$ converges to the classifier with the lowest multi-label risk, thus making it possible to achieve better classification performance w.r.t. the multi-label performance metric.

Theorem 2. $\ell_{\mathrm{SR}}$ (Eq.13) is Bayes consistent w.r.t. $L_{\mathrm{NA}}$ (Eq.12) when margin $\neq 0$.

Proof. Proof appears in Appendix A.2.
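The squared ranking loss (Eq. 13) and the Bayes-optimal solution it induces (derived in Appendix A.2) can be sanity-checked numerically. A minimal sketch with our own names; the grid search below stands in for the closed-form minimizer:

```python
def squared_ranking_loss(f_i, f_0, y_i, margin=0.25):
    """ell_SR(f_i, y_i) = (1/4) * (y_i * (f_i - f_0) - margin)**2  (Eq. 13)."""
    return 0.25 * (y_i * (f_i - f_0) - margin) ** 2

def conditional_risk(d, delta, margin=0.25):
    # Expected ell_SR for one label as a function of d = f_i - f_0,
    # where delta = P(y_i = 1 | x).
    return delta * squared_ranking_loss(d, 0.0, +1, margin) \
        + (1.0 - delta) * squared_ranking_loss(d, 0.0, -1, margin)

def best_d(delta, margin=0.25):
    # Grid-search minimiser over d; Appendix A.2 derives the closed form
    # d* = (2 * delta - 1) * margin.
    grid = [i / 10000.0 - 1.0 for i in range(20001)]
    return min(grid, key=lambda d: conditional_risk(d, delta, margin))
```

For any `margin != 0`, `best_d(delta)` is positive exactly when `delta > 1/2`, which is the Bayes-optimal ranking condition of Eq. 14.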
As a supplement, we likewise compare the performance of the log-sigmoid ranking loss in Section 4.4.

# 4 Experiments

In this section, we evaluate our method on two multi-label document-level RE datasets with incomplete labeling. We also demonstrate the effectiveness of our method with different levels of labeling.

| Model | Ign F1 | F1 | P | R |
| --- | --- | --- | --- | --- |
| BiLSTM* | 32.57 ± 0.22 | 32.86 ± 0.22 | 77.04 ± 1.01 | 20.89 ± 0.17 |
| GAIN+BERT$_{\text{Base}}$* | 45.57 ± 1.36 | 45.82 ± 1.38 | 88.11 ± 1.07 | 30.98 ± 1.36 |
| DocuNET+RoBERTa$_{\text{Large}}$* | 45.88 ± 0.33 | 45.99 ± 0.33 | 94.16 ± 0.32 | 30.42 ± 0.29 |
| ATLOP+BERT$_{\text{Base}}$* | 43.12 ± 0.24 | 43.25 ± 0.25 | 92.49 ± 0.33 | 28.23 ± 0.23 |
| PN+ATLOP+BERT$_{\text{Base}}$* | 51.11 ± 0.49 | 51.68 ± 0.40 | 77.55 ± 3.10 | 38.79 ± 0.49 |
| SR-PN+ATLOP+BERT$_{\text{Base}}$* | 52.70 ± 0.28 | 53.10 ± 0.26 | 83.76 ± 0.49 | 38.87 ± 0.23 |
| PU+ATLOP+BERT$_{\text{Base}}$* | 51.80 ± 1.11 | 53.14 ± 1.01 | 58.81 ± 2.41 | 48.15 ± 0.14 |
| SR-PU+ATLOP+BERT$_{\text{Base}}$* | 53.87 ± 0.27 | 55.06 ± 0.25 | 63.42 ± 0.64 | 48.66 ± 0.11 |
| S-PU+ATLOP+BERT$_{\text{Base}}$* | 53.36 ± 1.22 | 54.44 ± 1.12 | 65.95 ± 2.84 | 46.38 ± 0.22 |
| SSR-PU+ATLOP+BERT$_{\text{Base}}$* | 55.21 ± 0.12 | 56.14 ± 0.12 | 70.42 ± 0.18 | 46.67 ± 0.14 |
| ATLOP+RoBERTa$_{\text{Large}}$* | 45.09 ± 0.26 | 45.19 ± 0.27 | 94.75 ± 0.25 | 29.67 ± 0.24 |
| PN+ATLOP+RoBERTa$_{\text{Large}}$* | 54.21 ± 0.34 | 54.47 ± 0.35 | 89.22 ± 0.36 | 39.20 ± 0.41 |
| SR-PN+ATLOP+RoBERTa$_{\text{Large}}$* | 56.06 ± 0.21 | 56.39 ± 0.23 | 87.47 ± 0.60 | 41.61 ± 0.39 |
| PU+ATLOP+RoBERTa$_{\text{Large}}$* | 56.97 ± 0.47 | 58.04 ± 0.43 | 67.39 ± 1.22 | 50.98 ± 0.39 |
| SR-PU+ATLOP+RoBERTa$_{\text{Large}}$* | 57.64 ± 0.25 | 58.77 ± 0.26 | 66.39 ± 0.47 | 52.72 ± 0.44 |
| S-PU+ATLOP+RoBERTa$_{\text{Large}}$* | 58.19 ± 0.24 | 58.95 ± 0.25 | 75.68 ± 0.36 | 48.29 ± 0.40 |
| SSR-PU+ATLOP+RoBERTa$_{\text{Large}}$* | 58.68 ± 0.43 | 59.50 ± 0.45 | 74.21 ± 0.53 | 49.67 ± 0.77 |

Table 2: Results on the Re-DocRED revised test set. Results with * are based on our implementation.

# 4.1 Experimental Setups

Datasets. DocRED (Yao et al., 2019) is a large-scale document-level RE dataset with 96 pre-defined relations, constructed from Wikipedia by a recommend-revise scheme. Tan et al. (2022b) observed a large number of false negatives in the annotation of DocRED and provided a high-quality revised version, Re-DocRED. In our experiments, we use the incompletely labeled original DocRED training set for training and the revised test set for testing. ChemDisGene (Zhang et al., 2022) is a newly proposed biomedical multi-label document-level RE dataset. This corpus is automatically derived from the CTD database (Davis et al., 2021) by distant supervision and has 523 abstracts labeled by domain experts as an additional All relationships test set. We use the distantly supervised training set for training and the All relationships test set for testing. On both datasets, the average number of relations per document in the test set is much larger than in the training set, which indicates incomplete labeling in the training set, with a large number of false negatives. The statistics of the two datasets are listed in Table 1.

Implementation details. For each dataset, we use ATLOP (Zhou et al., 2021) as the encoding model for the representation learning of relations. Further, we apply cased BERT$_{\text{Base}}$ (Devlin et al., 2019) and RoBERTa$_{\text{Large}}$ (Liu et al., 2019) for DocRED and PubmedBert (Gu et al., 2021) for ChemDisGene. We use Huggingface's Transformers (Wolf et al., 2020) to implement all the models and AdamW (Loshchilov and Hutter, 2019) as the optimizer, and apply a linear warmup (Goyal et al., 2017) for the first 6% of steps followed by a linear decay to 0.
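Such a warmup-then-decay schedule can be expressed as a step-dependent learning-rate multiplier. A minimal sketch (the function name and signature are ours; in practice one would typically use a ready-made scheduler such as Transformers' `get_linear_schedule_with_warmup`):

```python
def lr_lambda(step, total_steps, warmup_frac=0.06):
    """Learning-rate multiplier: linear warmup over the first `warmup_frac`
    of training steps, then linear decay to 0."""
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return step / warmup_steps  # ramps from 0 up to 1
    # decays linearly from 1 at the end of warmup down to 0 at total_steps
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

This multiplier can be plugged into a `LambdaLR`-style scheduler, scaling the base learning rate (e.g., 5e-5 or 3e-5 above) at each step.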
For DocRED, we set the learning rates for the BERT$_{\text{Base}}$ and RoBERTa$_{\text{Large}}$ settings to 5e-5 and 3e-5, respectively, in the same way as ATLOP. For ChemDisGene, the learning rate is set to 2e-5. The batch size (number of documents per batch) is set to 4 and 8 for the two datasets, respectively. In our experiments, we set $\pi_i = 3\pi_{\text{labeled},i}$ and margin = 0.25. To evaluate the efficacy of our methods in realistic settings, we do not use any fully labeled validation or test sets at any stage of training. The training stopping criterion is 30 epochs for both datasets. We report the performance of the final model instead of the best checkpoint. All experiments are conducted on a single Tesla A100-40G GPU.

Baseline. We re-implemented the existing fully supervised methods BiLSTM (Yao et al., 2019), GAIN (Zeng et al., 2020), DocuNET (Zhang et al., 2021) and ATLOP (Zhou et al., 2021) as the baseline models for DocRED in this new setup, where for GAIN and BiLSTM we use a fixed threshold of 0.5 and all methods take the final result of the model instead of the best checkpoint. For ChemDisGene, we used BRAN (Verga et al., 2018), PubmedBert (Gu et al., 2021) and PubmedBert + BRAN mentioned in (Zhang et al., 2022) as the baseline models, and ATLOP is re-implemented as a supplementary baseline.

| Model | F1 | P | R |
| --- | --- | --- | --- |
| BRAN† | 32.5 | 41.8 | 26.6 |
| PubmedBert† | 42.1 | 64.3 | 31.3 |
| BRAN+PubmedBert† | 43.8 | 70.9 | 31.6 |
| ATLOP+PubmedBert* | 42.73 ± 0.36 | 76.17 ± 0.54 | 29.70 ± 0.36 |
| PN+ATLOP+PubmedBert | 44.25 ± 0.24 | 73.46 ± 0.95 | 31.67 ± 0.16 |
| SR-PN+ATLOP+PubmedBert | 46.56 ± 0.35 | 69.84 ± 0.54 | 34.93 ± 0.40 |
| PU+ATLOP+PubmedBert | 44.60 ± 0.70 | 46.56 ± 1.17 | 42.80 ± 0.35 |
| SR-PU+ATLOP+PubmedBert | 45.86 ± 0.38 | 46.91 ± 0.79 | 44.86 ± 0.37 |
| S-PU+ATLOP+PubmedBert | 46.73 ± 0.49 | 53.95 ± 1.14 | 41.23 ± 0.36 |
| SSR-PU+ATLOP+PubmedBert | 48.56 ± 0.23 | 54.27 ± 0.40 | 43.93 ± 0.32 |

Table 3: Results on the ChemDisGene All relationships test set. Results with † are reported from (Zhang et al., 2022). Results with * are based on our implementation.

Evaluation metric. For DocRED, we use micro F1 (F1), micro ignore F1 (Ign F1), precision (P) and recall (R) as the evaluation metrics for the overall performance of a model. Ign F1 measures the F1 score excluding the relations shared by the training and test sets. For ChemDisGene, we use micro F1 (F1), precision (P) and recall (R) as the evaluation metrics.

# 4.2 Main Results

In this subsection, we compare PN learning (PN), squared ranking loss PN learning (SR-PN), PU learning (PU), squared ranking loss PU learning (SR-PU), PU learning under a prior shift of the training data (S-PU) and SSR-PU. All methods use the same encoder and differ only in the loss function. For each method, we use the same hyper-parameter settings and report the mean and standard deviation on the test set over 5 runs with different random seeds (62, 63, 64, 65, 66).

Results on DocRED. As shown in Table 2, our SSR-PU method achieves state-of-the-art F1 and Ign F1 in both the BERT$_{\text{Base}}$ and RoBERTa$_{\text{Large}}$ settings and outperforms the original ATLOP by 13.58 and 14.52 F1 points, respectively. Meanwhile, consistent with the observation of Huang et al. (2022), existing fully supervised document-level RE methods suffer a significant performance degradation in the incompletely labeled scenario.
The original ATLOP method has the highest precision (P) but low recall (R), which implies that supervised learning methods that simply treat unlabeled data as negative samples lack the generalization ability to extract relation instances that are systematically missed in the dataset. PN learning uses an estimated prior, but yields a biased result because positive samples remain in the unlabeled data. PU learning, in contrast, uses both unlabeled and labeled data to better estimate the expectation over negative samples, which results in a higher recall. In addition, ordinary PU methods without a prior shift overestimate the share of positive samples in the unlabeled data, so the model tends to identify more samples as positive, i.e., higher recall, but this also leads to more false-positive predictions, i.e., lower precision. In contrast, the S-PU method with a prior shift effectively mitigates this phenomenon by bringing the positive samples the model estimates in the unlabeled data closer to their true distribution. For example, in experiments under the BERT$_{\text{Base}}$ setting, recall decreases slightly, by less than 2 percentage points, while precision improves by about 7 percentage points, leading to an improvement in the final results. This phenomenon is even more evident for common relations, as analyzed in Section 4.4. Finally, applying the squared ranking loss to PN learning, PU learning and S-PU learning further improves the performance of the model, demonstrating the effectiveness of using the none-class score as an adaptive threshold for document-level RE.
| Model | Ign F1 | F1 |
| --- | --- | --- |
| ATLOP+BERT$_{\text{Base}}$* | 72.70 | 73.47 |
| SSR-PU+BERT$_{\text{Base}}$* | 72.91 | 74.33 |
| ATLOP+RoBERTa$_{\text{Large}}$* | 76.92 | 77.58 |
| DocuNET+RoBERTa$_{\text{Large}}$† | 77.27 | 77.92 |
| KD-DocRE+RoBERTa$_{\text{Large}}$† | 77.63 | 78.35 |
| SSR-PU+RoBERTa$_{\text{Large}}$ | 77.67 | 78.86 |

Table 4: Results on the Re-DocRED revised test set under the fully supervised setting. Results with † are reported from (Tan et al., 2022b). Results with * are based on our implementation.
| Model | Ign F1 | F1 |
| --- | --- | --- |
| ATLOP+BERT$_{\text{Base}}$* | 16.99 | 17.01 |
| SSR-PU+BERT$_{\text{Base}}$* | 46.47 | 47.24 |
| ATLOP+RoBERTa$_{\text{Large}}$* | 17.29 | 17.31 |
| SSR-PU+RoBERTa$_{\text{Large}}$* | 48.98 | 49.74 |
Table 5: Results on the Re-DocRED revised test set under the extremely unlabeled setting. Results with * are based on our implementation.

Results on ChemDisGene. As shown in Table 3, the improvement of our method agrees with the results on DocRED, reaching a state-of-the-art F1 that is 5.83 points higher than the original ATLOP. Notice that the improvement on ChemDisGene is not as dramatic as on DocRED. We argue that this may be because some of the documents in the additionally annotated All relationships test set are from another corpus, DrugProt (Miranda et al., 2021), and the annotation by human experts deviates considerably from the original training-set distribution. This suggests that making document-level RE models more generalizable when the true test-set distribution is hard to estimate remains a challenging direction.

# 4.3 Different Levels of Labeling

Fully supervised setting. In this setting, we set $\pi_{i} = \pi_{\text{labeled},i}$ and keep the other hyper-parameters identical. As shown in Table 4, we use the Re-DocRED dataset revised by Tan et al. (2022b) in the same fully supervised setting to compare with the current state-of-the-art baseline models ATLOP (Zhou et al., 2021), DocuNET (Zhang et al., 2021) and KD-DocRE (Tan et al., 2022a). Our method achieves the same state-of-the-art results, demonstrating its effectiveness with full labeling. The result in this setting can be seen as an upper bound for document-level RE with incomplete labeling. More details of the experiment are shown in Appendix A.3.

Extremely unlabeled setting. In this setting, we use the original training set of DocRED to construct an extremely unlabeled training set, i.e., the number of labels for each relation type in a document is limited to 1. The average number of relations per processed document is reduced to 5.4. We consider this a more difficult and challenging scenario. We set $\pi_i = 12\pi_{\text{labeled},i}$ and keep the other hyper-parameters identical. As shown in Table 5, traditional supervised learning methods fail, while our proposed SSR-PU method still yields a robust result. It is worth noting that since the labeled samples are only a fraction of the true positive samples, i.e., the distribution is biased and $p(x \mid y_i = 1)$ is not equal to $p(x \mid s_i = 1)$, the first term in Eq.11 is actually a biased approximation to the first term in Eq.10. We consider this bias one of the bottlenecks of the current method and the main reason it degrades so much in extremely unlabeled scenarios, i.e., the bias widens there. This is a good direction for future research; possible solutions might involve data augmentation or bootstrapped labeling to alleviate the bias. More details of the experiment are shown in Appendix A.4.

# 4.4 Additional Analysis

Analysis of common relations. Tables 6 and 7 show the results for common relations on DocRED and ChemDisGene; these frequent relation types account for about 60% of the relation triples (Tan et al., 2022b; Zhang et al., 2022).

| Model | Freq. F1 | Freq. P | Freq. R |
| --- | --- | --- | --- |
| SR-PN | 60.79 | 87.83 | 46.49 |
| SR-PU | 62.43 | 60.28 | 64.74 |
| SSR-PU | 64.88 | 68.36 | 61.74 |

Table 6: Results for the 10 most common relation types on the Re-DocRED test set under the BERT$_{\text{Base}}$ setting.

| Model | Freq. F1 | Freq. P | Freq. R |
| --- | --- | --- | --- |
| SR-PN | 47.62 | 71.76 | 35.64 |
| SR-PU | 47.65 | 44.35 | 51.48 |
| SSR-PU | 50.91 | 52.09 | 49.78 |

Table 7: Results for the 5 most common relation types on the ChemDisGene All relationships test set.

It can be seen that the SR-PU method has a slightly higher recall and much lower precision, which corresponds to an overestimation of the number of positive samples in the unlabeled data. The SSR-PU method, on the other hand, alleviates this problem well, contributing to a better balance between precision and recall and to better performance. This indicates a large amount of prior shift in common relations, consistent with the observation of Huang et al. (2022) that common relations are more likely to be labeled in the dataset.

Comparison with other loss functions. We compare the squared loss with the log-sigmoid loss, which is commonly used for multi-label classification in document-level RE. The log-sigmoid loss is likewise rewritten into a none-class ranking form for further comparison with the squared ranking loss. The details of the loss functions are listed in Appendix A.5.

| Model | Ign F1 | F1 |
| --- | --- | --- |
| S-PU log-sigmoid | 52.23 | 53.43 |
| S-PU squared | 54.00 | 55.01 |
| S-PU log-sigmoid ranking | 52.42 | 53.66 |
| SSR-PU | 55.43 | 56.36 |

Table 8: Results on the Re-DocRED test set under the BERT$_{\text{Base}}$ setting with different loss functions.

As shown in Table 8, both the squared loss and the squared ranking loss improve significantly over the other loss functions, demonstrating the effectiveness of our proposed loss functions for the multi-label document-level RE task.

# 5 Conclusion and Future Work

In this paper, we propose a unified positive-unlabeled learning framework, SSR-PU, which effectively addresses the incomplete labeling of document-level RE. We use PU learning for document-level RE for the first time and introduce PU learning under a prior shift of the training data to adapt to different levels of labeling. We also propose the squared ranking loss, which uses the none-class score as an adaptive threshold. Experiments demonstrate that our method achieves state-of-the-art results with different levels of labeling and provides a robust new baseline for incompletely labeled document-level RE.
In the future, we will consider methods that do not require estimation of priors, allowing more accurate generalization to unknown distributions, as well as addressing the problem of biased distributions with incompletely labeled positive samples and further improving the extraction performance on long-tail relations.

# Limitations

Regarding the limitations of our proposed method, it requires an estimate of an overall prior that affects the final result. In a realistic scenario, a very accurate prior estimate may be difficult to obtain. In addition, the biased distribution caused by the incomplete labeling of positive samples is one of the bottlenecks of the current method, and there is still much room for improvement in extremely unlabeled scenarios and in scenarios where the gap between the test-set and training-set distributions is too large, which can be a direction for further research. However, for now, we believe that our work is a valuable contribution to advancing the application of document-level RE in more realistic scenarios and provides a robust baseline for this direction.

# Acknowledgements

We sincerely thank all anonymous reviewers for their valuable comments to improve our work. This research is funded by the Basic Research Project of Shanghai Science and Technology Commission (No.19JC1410101). The computation is supported by ECNU Multifunctional Platform for Innovation (001).

# References

Masaki Aota, Tao Ban, Takeshi Takahashi, and Noboru Murata. 2021. Multi-label positive and unlabeled learning and its application to common vulnerabilities and exposure categorization. In 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), pages 988-996.

Lele Cao, Emil Larsson, Vilhelm von Ehrenheim, Dhiana Deva Cavalcanti Rocha, Anna Martin, and Sonja Horn. 2021. PAUSE: Positive and annealed unlabeled sentence embedding.
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10096-10107, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Nontawat Charoenphakdee and Masashi Sugiyama. 2019. Positive-unlabeled classification under class prior shift and asymmetric error. In Proceedings of the 2019 SIAM International Conference on Data Mining, pages 271-279. SIAM.

Allan Peter Davis, Cynthia J Grondin, Robin J Johnson, Daniela Sciaky, Jolene Wiegers, Thomas C Wiegers, and Carolyn J Mattingly. 2021. Comparative toxicogenomics database (ctd): update 2021. *Nucleic acids research*, 49(D1):D1138–D1143.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Marthinus C du Plessis, Gang Niu, and Masashi Sugiyama. 2014. Analysis of learning from positive and unlabeled data. In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.

Charles Elkan and Keith Noto. 2008. Learning classifiers from only positive and unlabeled data. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '08, page 213-220, New York, NY, USA. Association for Computing Machinery.

Johannes Fürnkranz, Eyke Hüllermeier, Eneldo Loza Mencía, and Klaus Brinker. 2008. Multilabel classification via calibrated label ranking. Machine learning, 73(2):133-153.

Saurabh Garg, Yifan Wu, Alexander J Smola, Sivaraman Balakrishnan, and Zachary Lipton. 2021. Mixture proportion estimation and pu learning: a modern approach. In Advances in Neural Information Processing Systems, volume 34, pages 8532-8544.
Curran Associates, Inc.

Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. 2017. Accurate, large minibatch sgd: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677.

Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-specific language model pretraining for biomedical natural language processing. ACM Trans. Comput. Healthcare, 3(1).

Zhengqiu He, Wenliang Chen, Yuyi Wang, Wei Zhang, Guanchun Wang, and Min Zhang. 2020. Improving neural relation extraction with positive and unlabeled learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):7927-7934.

Quzhe Huang, Shibo Hao, Yuan Ye, Shengqi Zhu, Yansong Feng, and Dongyan Zhao. 2022. Does recommend-revise produce reliable annotations? an analysis on missing instances in DocRED. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6241-6252, Dublin, Ireland. Association for Computational Linguistics.

Like Hui and Mikhail Belkin. 2020. Evaluation of neural architectures trained with square loss vs cross-entropy in classification tasks. In International Conference on Learning Representations.

Atsushi Kanehira and Tatsuya Harada. 2016. Multi-label ranking from positive and unlabeled data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Ryuichi Kiryo, Gang Niu, Marthinus C du Plessis, and Masashi Sugiyama. 2017. Positive-unlabeled learning with non-negative risk estimator. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

Bo Li, Wei Ye, Zhonghao Sheng, Rui Xie, Xiangyu Xi, and Shikun Zhang. 2020. Graph enhanced dual attention network for document-level relation extraction.
In Proceedings of the 28th International Conference on Computational Linguistics, pages 1551-1560, Barcelona, Spain (Online). International Committee on Computational Linguistics. +Xiaoli Li and Bing Liu. 2003. Learning to classify texts using positive and unlabeled data. In Proceedings of the 18th International Joint Conference on Artificial Intelligence, IJCAI'03, page 587-592, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. +Yuncheng Li, Yale Song, and Jiebo Luo. 2017. Improving pairwise ranking for multi-label image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations. +Antonio Miranda, Farrokh Mehryary, Jouni Luoma, Sampo Pyysalo, Alfonso Valencia, and Martin Krallinger. 2021. Overview of drugprot biocreative vii track: quality evaluation and large scale text mining of drug-gene/protein relations. In Proceedings of the seventh BioCreative challenge evaluation workshop. +Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116, Berlin, Germany. Association for Computational Linguistics. +Shota Nakajima and Masashi Sugiyama. 2021. Positive unlabeled classification under class-prior shift: A prior-invariant approach based on density ratio estimation. arXiv preprint arXiv:2107.05045. + +Guoshun Nan, Zhijiang Guo, Ivan Sekulic, and Wei Lu. 2020. Reasoning with latent structure refinement for document-level relation extraction. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1546-1557, Online. Association for Computational Linguistics. +Minlong Peng, Xiaoyu Xing, Qi Zhang, Jinlan Fu, and Xuanjing Huang. 2019. Distantly supervised named entity recognition using positive-unlabeled learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2409-2419, Florence, Italy. Association for Computational Linguistics. +Martinus Du Plessis, Gang Niu, and Masashi Sugiyama. 2015. Convex formulation for learning from positive and unlabeled data. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1386-1394, Lille, France. PMLR. +Qingyu Tan, Ruidan He, Lidong Bing, and Hwee Tou Ng. 2022a. Document-level relation extraction with adaptive focal loss and knowledge distillation. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 1672-1681, Dublin, Ireland. Association for Computational Linguistics. +Qingyu Tan, Lu Xu, Lidong Bing, and Hwee Tou Ng. 2022b. Revisiting docred-addressing the overlooked false negative problem in relation extraction. arXiv preprint arXiv:2205.12696. +Zhenwei Tang, Shichao Pei, Zhao Zhang, Yongchun Zhu, Fuzhen Zhuang, Robert Hoehndorf, and Xiangliang Zhang. 2022. Positive-unlabeled learning with adversarial data augmentation for knowledge graph completion. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 2248-2254. International Joint Conferences on Artificial Intelligence Organization. Main Track. +Patrick Verga, Emma Strubell, and Andrew McCallum. 2018. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 872-884, New Orleans, Louisiana. Association for Computational Linguistics.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Benfeng Xu, Quan Wang, Yajuan Lyu, Yong Zhu, and Zhendong Mao. 2021a. Entity structure within and throughout: Modeling mention dependencies for document-level relation extraction. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16):14149-14157.

Wang Xu, Kehai Chen, Lili Mou, and Tiejun Zhao. 2022. Document-level relation extraction with sentences importance estimation and focusing. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2920-2929, Seattle, United States. Association for Computational Linguistics.

Wang Xu, Kehai Chen, and Tiejun Zhao. 2021b. Document-level relation extraction with reconstruction. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16):14167-14175.

Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. DocRED: A large-scale document-level relation extraction dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 764-777, Florence, Italy.
Association for Computational Linguistics.

Shuang Zeng, Yuting Wu, and Baobao Chang. 2021. SIRE: Separate intra- and inter-sentential reasoning for document-level relation extraction. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 524–534, Online. Association for Computational Linguistics.

Shuang Zeng, Runxin Xu, Baobao Chang, and Lei Li. 2020. Double graph based reasoning for document-level relation extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1630-1640, Online. Association for Computational Linguistics.

Dongxu Zhang, Sunil Mohan, Michaela Torkar, and Andrew McCallum. 2022. A distant supervision corpus for extracting biomedical relationships between chemicals, diseases and genes. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 1073-1082, Marseille, France. European Language Resources Association.

Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen, Fei Huang, Luo Si, and Huajun Chen. 2021. Document-level relation extraction as semantic segmentation. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 3999-4006. International Joint Conferences on Artificial Intelligence Organization. Main Track.

Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205-2215, Brussels, Belgium. Association for Computational Linguistics.

Kang Zhou, Yuepei Li, and Qi Li. 2022. Distantly supervised named entity recognition via confidence-based multi-class positive and unlabeled learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7198-7211, Dublin, Ireland. Association for Computational Linguistics.
Wenxuan Zhou, Kevin Huang, Tengyu Ma, and Jing Huang. 2021. Document-level relation extraction with adaptive thresholding and localized context pooling. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16):14612-14620.

Yang Zhou and Wee Sun Lee. 2022. None class ranking loss for document-level relation extraction. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 4538-4544. International Joint Conferences on Artificial Intelligence Organization. Main Track.

# A Appendix

# A.1 Proof of Theorem 1

Proof. Based on the fact that $p_{\mathrm{U}_i}(\boldsymbol{x}) = \pi_{u,i}\, p_{\mathrm{P}_i}(\boldsymbol{x}) + (1 - \pi_{u,i})\, p_{\mathrm{N}_i}(\boldsymbol{x})$, the term $(1 - \pi_{u,i})\,\mathbb{E}_{\mathrm{N}_i}[\ell(f_i, -1)]$ can alternatively be expressed as $\mathbb{E}_{\mathrm{U}_i}[\ell(f_i, -1)] - \pi_{u,i}\,\mathbb{E}_{\mathrm{P}_i}[\ell(f_i, -1)]$. We can rewrite $R_{\mathrm{ori}}(f)$ as follows:

$$
\begin{aligned}
R_{\mathrm{ori}}(f) &= \sum_{i=1}^{K} \Big( \pi_{i}\, \mathbb{E}_{\mathrm{P}_{i}}[\ell(f_{i}, +1)] + (1 - \pi_{i})\, \mathbb{E}_{\mathrm{N}_{i}}[\ell(f_{i}, -1)] \Big) \\
&= \sum_{i=1}^{K} \Big( \pi_{i}\, \mathbb{E}_{\mathrm{P}_{i}}[\ell(f_{i}, +1)] + \frac{1 - \pi_{i}}{1 - \pi_{u,i}} \big( \mathbb{E}_{\mathrm{U}_{i}}[\ell(f_{i}, -1)] - \pi_{u,i}\, \mathbb{E}_{\mathrm{P}_{i}}[\ell(f_{i}, -1)] \big) \Big) \\
&= R_{\mathrm{S\text{-}PU}}(f).
\end{aligned} \tag{15}
$$

We conclude that $R_{\mathrm{ori}}(f) = R_{\mathrm{S\text{-}PU}}(f)$.

# A.2 Proof of Theorem 2

Proof. Let $\Delta_i = \mathrm{P}(y_i = 1 \mid \boldsymbol{x})$ be the marginal probability that the $i$-th label is positive.
The conditional risk of $\ell_{\mathrm{SR}}$ is:

$$
\begin{aligned}
R_{\ell_{\mathrm{SR}}}(\mathrm{P}, \boldsymbol{f}) = \sum_{i=1}^{K} \Big( &\Delta_i \tfrac{1}{4} \big( (f_i - f_0) - \mathrm{margin} \big)^2 \\
+\; &(1 - \Delta_i) \tfrac{1}{4} \big( -(f_i - f_0) - \mathrm{margin} \big)^2 \Big). \tag{16}
\end{aligned}
$$

For $i = 1, \dots, K$, the partial derivative can be computed by

$$
\begin{aligned}
\frac{\partial}{\partial f_i} \mathbb{E}[\ell_{\mathrm{SR}}(\mathrm{P}, \boldsymbol{f}) \mid \boldsymbol{x}] = \; &\Delta_i \tfrac{1}{2} \big( (f_i - f_0) - \mathrm{margin} \big) \\
-\; &(1 - \Delta_i) \tfrac{1}{2} \big( (f_0 - f_i) - \mathrm{margin} \big). \tag{17}
\end{aligned}
$$

Since $\ell_{\mathrm{SR}}$ is convex and differentiable, we can obtain the optimal $f^{*}$ by setting the partial derivatives to zero, which leads to

$$
f_i^{*} - f_0^{*} = 2\Delta_i \cdot \mathrm{margin} - \mathrm{margin}, \quad i = 1, \dots, K. \tag{18}
$$

When $\mathrm{margin} \neq 0$, for the optimal score function $f^{*}$, $f_i^{*} > f_0^{*}$ if and only if $\Delta_i > \frac{1}{2}$, which minimizes the $\ell_{\mathrm{SR}}$ risk according to Eq. 14. Therefore, $\ell_{\mathrm{SR}}$ is Bayes consistent w.r.t. $L_{\mathrm{NA}}$.

# A.3 Results under the Fully Supervised Setting

The detailed results under the fully supervised setting are shown in Table 9. We report the mean and standard deviation on the validation and test sets over 5 runs with different random seeds (62, 63, 64, 65, 66).

# A.4 Results under the Extremely Unlabeled Setting

The detailed results under the extremely unlabeled setting are shown in Table 10. We report the mean and standard deviation on the test set over 5 runs with different random seeds (62, 63, 64, 65, 66).
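As a sanity check on the two derivations in A.1 and A.2, the risk identity of Eq. 15 and the minimizer of Eq. 18 can be verified numerically. The discrete distributions, score values, and priors below are arbitrary toy stand-ins (not values from the paper); the logistic loss is used only as an example loss:

```python
import numpy as np

rng = np.random.default_rng(0)
K, S = 3, 5  # toy setting: K classes, discrete distributions over S points

# Theorem 1: R_ori(f) = R_S-PU(f) for any loss; the logistic loss is used here.
ell = lambda f, y: np.log1p(np.exp(-y * f))
E = lambda dist, vals: (dist * vals).sum(axis=1)  # per-class expectation

f = rng.normal(size=(K, S))                       # scores f_i(x) on the support
P = rng.dirichlet(np.ones(S), size=K)             # p_{P_i}
N = rng.dirichlet(np.ones(S), size=K)             # p_{N_i}
pi = rng.uniform(0.1, 0.4, size=K)                # class priors pi_i
pi_u = rng.uniform(0.1, 0.4, size=K)              # priors inside the unlabeled data
U = pi_u[:, None] * P + (1 - pi_u[:, None]) * N   # mixture p_{U_i}

R_ori = (pi * E(P, ell(f, +1)) + (1 - pi) * E(N, ell(f, -1))).sum()
R_spu = (pi * E(P, ell(f, +1))
         + (1 - pi) / (1 - pi_u)
         * (E(U, ell(f, -1)) - pi_u * E(P, ell(f, -1)))).sum()
assert abs(R_ori - R_spu) < 1e-9  # the two risks coincide

# Theorem 2: the minimizer of Eq. 16 in d = f_i - f_0 is (2*Delta_i - 1)*margin.
margin = 0.5
risk = lambda d, delta: (delta * (d - margin) ** 2
                         + (1 - delta) * (-d - margin) ** 2) / 4
grid = np.linspace(-2, 2, 400001)
for delta in (0.2, 0.5, 0.9):
    d_star = grid[np.argmin(risk(grid, delta))]
    assert abs(d_star - (2 * delta * margin - margin)) < 1e-4
    # hence f_i* > f_0* exactly when Delta_i > 1/2, matching Bayes consistency
print("both identities verified")
```

Both checks pass for any seed, since Eq. 15 is an exact algebraic identity and Eq. 16 is a convex quadratic in $f_i - f_0$.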
# A.5 Details of Other Loss Functions

We first show the convex log-sigmoid loss, which is commonly used in classification tasks:

$$
\ell_{\mathrm{LS}}(f_i, y_i) = -\log\big(\sigma(y_i f_i)\big), \tag{19}
$$

where $\sigma(x)$ is the sigmoid function.

Since the log-sigmoid loss is convex and differentiable, we can obtain its none-class ranking form.
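For reference, a minimal, numerically stable implementation of Eq. 19 and of its none-class ranking form (scoring each $f_i$ against the threshold score $f_0$); the function names are ours, not from the paper's code:

```python
import numpy as np

def log_sigmoid_loss(f, y):
    """Eq. 19: -log(sigmoid(y * f)), computed stably as
    max(-y*f, 0) + log1p(exp(-|y*f|)) to avoid overflow for large |f|."""
    z = y * f
    return np.maximum(-z, 0.0) + np.log1p(np.exp(-np.abs(z)))

def log_sigmoid_ranking_loss(f, f0, y):
    """None-class ranking form: the i-th score is ranked against f_0."""
    return log_sigmoid_loss(f - f0, y)

# Matches the naive formula where the naive formula does not overflow
x = np.array([-3.0, 0.0, 2.5])
naive = -np.log(1.0 / (1.0 + np.exp(-x)))
assert np.allclose(log_sigmoid_loss(x, 1), naive)
print(log_sigmoid_loss(np.array([1e4]), 1))  # stable even for huge scores
```

The stable form follows from $-\log\sigma(z) = \max(-z, 0) + \log(1 + e^{-|z|})$, which holds for both signs of $z$.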
| Model | Ign F1 (Dev) | F1 (Dev) | Ign F1 (Test) | F1 (Test) |
| --- | --- | --- | --- | --- |
| ATLOP + BERT-Base* | 73.12 ± 0.35 | 73.93 ± 0.38 | 72.70 ± 0.23 | 73.47 ± 0.25 |
| SSR-PU + BERT-Base* | 73.27 ± 0.19 | 74.69 ± 0.20 | 72.91 ± 0.23 | 74.33 ± 0.20 |
| ATLOP + RoBERTa-Large* | 76.98 ± 0.20 | 77.68 ± 0.21 | 76.92 ± 0.15 | 77.58 ± 0.16 |
| DocuNET + RoBERTa-Large† | 77.53 | 78.16 | 77.27 | 77.92 |
| KD-DocRE + RoBERTa-Large† | 77.92 | 78.65 | 77.63 | 78.35 |
| SSR-PU + RoBERTa-Large | 77.44 ± 0.25 | 78.66 ± 0.23 | 77.67 ± 0.25 | 78.86 ± 0.23 |
Table 9: Results on revised Re-DocRED under the fully supervised setting. Results with $\dagger$ are reported from Tan et al. (2022b). Results with $*$ are based on our implementation.
| Model | Ign F1 | F1 | P | R |
| --- | --- | --- | --- | --- |
| ATLOP + BERT-Base* | 16.99 ± 0.24 | 17.01 ± 0.24 | 93.17 ± 0.48 | 9.36 ± 0.14 |
| SSR-PU + BERT-Base* | 46.47 ± 0.21 | 47.24 ± 0.23 | 59.52 ± 0.87 | 39.18 ± 0.61 |
| ATLOP + RoBERTa-Large* | 17.29 ± 0.28 | 17.31 ± 0.28 | 94.85 ± 0.19 | 9.52 ± 0.17 |
| SSR-PU + RoBERTa-Large* | 48.98 ± 0.30 | 49.74 ± 0.30 | 61.57 ± 1.34 | 41.75 ± 0.42 |
Table 10: Results on the Re-DocRED revised test set under the extremely unlabeled setting. Results with * are based on our implementation.
| Model | Ign F1 | F1 |
| --- | --- | --- |
| SSR-PU (margin = 0) | 0.18 | 0.20 |
| SSR-PU (margin = 0.1) | 55.76 | 56.81 |
| SSR-PU (margin = 0.25) | 55.43 | 56.36 |
| SSR-PU (margin = 0.5) | 55.27 | 56.19 |
| SSR-PU (margin = 1.0) | 54.25 | 55.24 |
Table 11: Results on the Re-DocRED revised test set under the BERT-Base setting with different values of margin.
| Model | F1 | P | R |
| --- | --- | --- | --- |
| SSR-PU ($\pi_i = 2\pi_{\mathrm{labeled},i}$) | 55.44 | 78.98 | 42.71 |
| SSR-PU ($\pi_i = 3\pi_{\mathrm{labeled},i}$) | 56.36 | 70.53 | 46.93 |
| SSR-PU ($\pi_i = 4\pi_{\mathrm{labeled},i}$) | 54.74 | 61.45 | 49.35 |
Table 12: Results on the Re-DocRED revised test set under the BERT-Base setting with different $\pi_i$ estimates.

Log-sigmoid ranking loss:

$$
\ell_{\mathrm{LSR}}(f_i, y_i) = -\log\big(\sigma(y_i (f_i - f_0))\big). \tag{20}
$$

This ranking loss function remains Bayes consistent with $L_{\mathrm{NA}}$ (Eq. 12).

# A.6 Sensitivity to the Hyper-Parameter margin

As shown in Table 11, the model fails to train when $\mathrm{margin} = 0$, and is insensitive to the choice of margin when $\mathrm{margin} \neq 0$. This is consistent with our proof.

# A.7 Influence of Prior Estimation

As shown in Table 12, the experimental results with different $\pi_i$ show that our method is insensitive to the estimation of $\pi_i$. Smaller estimates of $\pi_i$ lead to higher precision and lower recall, while the opposite holds for larger estimates of $\pi_i$.
# AX-MABSA: A Framework for Extremely Weakly Supervised Multi-label Aspect Based Sentiment Analysis

Sabyasachi Kamila$^{1}$, Walid Magdy$^{1}$, Sourav Dutta$^{2}$ and MingXue Wang$^{2}$

$^{1}$ School of Informatics, University of Edinburgh

$^{2}$ Huawei Research Centre, Dublin, Ireland

{skamila,wmagdy}@inf.ed.ac.uk, {sourav.dutta2,wangmingxue1}@huawei.com

# Abstract

Aspect Based Sentiment Analysis is a dominant research area with potential applications in social media analytics, business, finance, and health. Prior works in this area are primarily based on supervised methods, with a few techniques using weak supervision limited to predicting a single aspect category per review sentence. In this paper, we present an extremely weakly supervised multi-label Aspect Category Sentiment Analysis framework which does not use any labelled data. We rely only on a single word per class as initial indicative information. We further propose an automatic word selection technique to choose these seed category and sentiment words.
We explore unsupervised language model post-training to improve the overall performance, and propose a multi-label generator model to generate multiple aspect category-sentiment pairs per review sentence. Experiments conducted on four benchmark datasets show that our method outperforms other weakly supervised baselines by a significant margin. $^{1}$

# 1 Introduction

Aspect-based sentiment analysis (ABSA) is a well-known sentiment analysis task which provides more fine-grained information than simple sentiment understanding (Liu, 2012). The main goal of ABSA is to find the aspects and their associated sentiments within a given text. While work on ABSA has expanded in different directions, it has primarily two sub-tasks, Aspect Term Sentiment Analysis (ATSA) and Aspect Category Sentiment Analysis (ACSA) (Xue and Li, 2018). ATSA consists of different tasks like aspect term extraction (Li et al., 2018; Luo et al., 2019; Li et al., 2020a; Shi et al., 2021), aspect term sentiment classification (He et al., 2018; Chen and Qian, 2019; Hou et al., 2021), opinion term extraction (Dai and Song, 2019; He et al., 2019; Chen and Qian, 2020b), aspect-oriented opinion term extraction (Fan et al., 2019; Wu et al., 2020a), aspect-opinion pair extraction (Zhao et al., 2020), etc. For example, in the sentence "The sushi is top-notch, the waiter is attentive, but the atmosphere is dull.", ATSA would extract the aspect terms 'sushi', 'waiter' and 'atmosphere'; the opinion terms 'top-notch', 'attentive', and 'dull'; and their associated sentiments 'positive', 'positive' and 'negative'. The other sub-task, ACSA, aims to find the higher-order aspect categories and their associated sentiments from a given text. In the above example, ACSA would detect the categories as 'food' (as 'sushi' is a type of 'food'), 'service' and 'ambience'; and the associated sentiments as 'positive', 'positive' and 'negative'.
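The two sub-tasks differ only in the granularity of their outputs. For the running example, the outputs can be rendered as simple tuple lists (all labels are taken from the text above; the variable names are illustrative):

```python
sentence = ("The sushi is top-notch, the waiter is attentive, "
            "but the atmosphere is dull.")

# ATSA output: (aspect term, opinion term, sentiment) triples
atsa = [("sushi", "top-notch", "positive"),
        ("waiter", "attentive", "positive"),
        ("atmosphere", "dull", "negative")]

# ACSA output: (aspect category, sentiment) pairs, with terms mapped to
# higher-order categories ('sushi' -> 'food', 'waiter' -> 'service', ...)
acsa = [("food", "positive"), ("service", "positive"), ("ambience", "negative")]

print(acsa)
```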
Existing research on ABSA is dominated by supervised methods, where labelled training data is provided (Chen et al., 2017; Xue and Li, 2018; Cai et al., 2021; Liu et al., 2021; Xu et al., 2021; Yan et al., 2021). A few works try to solve the problem in a weakly/semi-supervised manner, where a few labelled samples are provided (Wang et al., 2021a). However, there has been a lack of study on ABSA using unsupervised methods, i.e., without using any labelled data. A few works have focused on unsupervised aspect term extraction (Shi et al., 2021); however, such works do not deal with the sentiment associated with the aspects. An existing work on weakly supervised ACSA (Huang et al., 2020) only considered a single aspect category per sentence, thus limiting the task to a large extent.

Motivated by the above, in this work, we present a methodology for the extremely weakly supervised ACSA task, where we do not need any labelled training samples. We solve both the aspect category detection (ACD) and ACSA tasks (on each review sentence) just by using the surface text of the aspect categories and sentiments. Given $N$ review sentences, $C$ categories of interest and $P$ polarities of interest, the ACD task generates $C$ clusters, while the ACSA task generates $(c_i, p_j)$ tuples where $c_i \in C$ and $p_j \in P$. As in (Wang et al., 2021b), we adopt the representation learning perspective, wherein representing sentences by class names leads to better clustering. We only use the surface text of the class names and unlabelled sentences to get aspect category and sentiment clusters.

However, in clustering, each review sentence would get only one label, thus limiting the task to a substantial extent. To tackle this, we propose X-MABSA, a multi-label generator model which makes use of a dependency parser (Qi et al., 2020) and a similarity-based attention mechanism to generate multiple categories and associated sentiment polarity labels for each review sentence.
In addition, we find that sometimes the representative text of aspect categories (provided as input) is not present (or is sparse) in the text corpus. This might lead to a skewed representation of the classes in our framework and thus degrade performance. Therefore, we present an automatic surface word selection strategy which represents the class names better. We combine this with our X-MABSA model and denote it as AX-MABSA.

We also show that unsupervised post-training of the language model on domain-specific data significantly improves the sentence representation and thus achieves better results for ACSA tasks. For this, we post-train the BERT language model (Devlin et al., 2019) using domain-specific unlabelled data. We perform experiments on four different benchmark aspect-based datasets (Pontiki et al., 2014, 2015, 2016; Cheng et al., 2017), and compare with different supervised and weakly supervised baselines. Our main contributions are as follows:

- an extremely weakly supervised method to solve the ACSA task without relying on any labelled data, using the class names as the only provided information;
- an automatic surface word selection strategy for choosing a suitable word corresponding to each aspect and sentiment class;
- use of BERT language model post-training on domain-specific unlabelled data for semantic representation of review sentences;
- a multi-label generator model which makes use of a dependency parser and a similarity-based attention mechanism for generating multiple aspect-sentiment labels for each sentence; and
- experimental results comparing our architecture with different existing baselines on four benchmark aspect datasets.

# 2 Related Work

Aspect Based Sentiment Analysis (ABSA) has attracted significant attention for a long time, and research has been done in primarily two directions: Aspect Term Sentiment Analysis (ATSA) and Aspect Category Sentiment Analysis (ACSA).
# 2.1 Aspect Term Sentiment Analysis

Research on ATSA has been conducted in several sub-categories.

Aspect Term Extraction In this sub-task, aspect terms associated with a category are extracted from a given text. Prior research frames this as a sequence labelling problem (Ma et al., 2019; Li et al., 2020a). Li and Lam (2017) proposed a neural network-based deep multi-task framework with a memory network for extracting aspect terms. Xu et al. (2018) presented a double embedding method which uses CNN (LeCun et al., 1995)-based sequence tagging, while Li et al. (2018) considered a summary of the opinions expressed in the text as well as the history of aspect detection for effective aspect term extraction. Chen and Qian (2020a) proposed a soft prototype-based approach with aspect word correlations to improve quality. A few unsupervised methods have tried to improve performance by using traditional topic modelling-based models. Luo et al. (2019) proposed a neural network-based unsupervised model which uses sememes for better lexical semantics. Shi et al. (2021) presented a self-supervised method which learns aspect embeddings in the word embedding space for aspect extraction.

Aspect-level Sentiment Classification In this sub-task, sentiment labels are assigned to each aspect term. Wang et al. (2016); Liu and Zhang (2017); Ma et al. (2017) proposed attention-based neural network models for aspect-level sentiment classification (ASC). Tay et al. (2018) modelled the relationship between words and aspects using an LSTM model (Hochreiter and Schmidhuber, 1997) to improve ASC performance. He et al. (2018) showed that document knowledge transfer improved the performance of the ASC task. Chen and
+ +Aspect-oriented Opinion Extraction This task extracts opinion terms associated with aspect terms. Fan et al. (2019) designed a sequence label model which used LSTM (Hochreiter and Schmidhuber, 1997) for aspect-oriented opinion extraction (AOE). Wu et al. (2020a) proposed a tagging scheme for AOE task which uses CNN (LeCun et al., 1995), LSTM (Hochreiter and Schmidhuber, 1997) and BERT (Devlin et al., 2019) for opinion extraction. Wu et al. (2020b) proposed a transfer learning method for transferring knowledge from sentiment classification task to AOE task. + +Recent works on ATSA have introduced more sub-tasks like aspect-opinion pair extraction, aspect-sentiment-opinion triplet extraction, aspect-category-opinion-sentiment quadruple extraction, etc. Yan et al. (2021) proposed a BART (Lewis et al., 2020)-based model to solve all ATSA tasks. Cai et al. (2021) introduced a new task called, aspect-category-opinion-sentiment quadruple extraction, a BERT (Devlin et al., 2019)-based model to deal with implicit aspects and opinion terms. Xu et al. (2021) proposed a new span-level method for the aspect-sentiment-opinion triplet extraction. + +# 2.2 Aspect Category Sentiment Analysis + +Aspect Category Sentiment Analysis (ACSA) finds aspect categories and their associated sentiments from a text. Research on this has been conducted on both Aspect Category Detection (ACD) and ACSA tasks. Ma et al. (2018) proposed a word attention-based hierarchical model which takes common-sense knowledge for solving ACSA task. Xue and Li (2018) presented a novel CNN (LeCun et al., 1995)-based model for ACSA task. Liang et al. (2019) proposed an encoding scheme which was aspect-guided and able to perform aspect-reconstruction. Sun et al. (2019) constructed an auxiliary text for aspects and reformed the ACSA as a classification task. + +Wang et al. (2020) proposed a novel dependency tree-based model and a relational graph attention network for encoding the sentences. Li et al. 
(2020b) designed a multi-instance framework for the multi-label ACSA task. Cai et al. (2020) recast the task as sentiment-category detection with a two-layer hierarchy, where the higher layer detected the sentiment while the lower layer detected the aspect category. Liang et al. (2021) presented a semi-supervised framework built on a beta distribution-based model, which finds semantically related words from the context of a target aspect. Liu et al. (2021) solved the ACSA task as a text generation method using BART (Lewis et al., 2020). Zhang et al. (2021) presented the aspect sentiment quad prediction task, where ACSA was formulated as a paraphrase generation task.

Almost all existing works on ACSA are based on supervised methods. In contrast, this work proposes a method for ACSA which does not require any labelled data and relies only on seed text for the aspect class names.

# 3 Proposed Methodology

Our proposed method, AX-MABSA, works on the following components: (a) class name-based clustering, (b) unsupervised language model post-training on domain-specific data for better contextual representation of review sentences, (c) a multi-label generator model to generate multiple categories and associated sentiment labels, and (d) automatic class-representative text selection. The overall framework is depicted in Figure 1.

Problem Formulation: We formulate the extremely weakly supervised ACD and ACSA tasks as follows. Consider as input a review sentence $x = \{x_{1},x_{2},x_{3},\dots,x_{n}\}$, where $x_{i}$ is the $i^{th}$ word of the sentence and $n$ is the length of the sentence, along with a list of $C$ predefined aspect categories. The output for the ACD task is $c$ categories for a sentence, where $c\subset C$. For the ACSA task, the output is a list of tuples $(c_{j},p_{k})$, where $c_{j}$ is the $j^{th}$ predicted category and $p_{k}$ is the $k^{th}$ predicted sentiment polarity corresponding to the category $c_{j}$.
The sentiment polarity $p$ is from the set $s\subset \{\text{positive, negative}\}$.

# 3.1 ACSA Module

As a primary task, we address aspect detection based on the seed aspect categories provided as input. We adopt the X-Class model presented in (Wang et al., 2021b) for solving extremely weakly supervised classification tasks, validated mainly on topic modelling datasets. This module involves four stages: (a) word representation, (b) class representation, (c) class-specific document representation, and (d) document-class alignment.

![](images/4f814e7a389e0f658142103cffb33be164fa166658a4d521b7236ec864884317.jpg)

![](images/601ebe544c057cfe1dddedc3517d8ecefd46e2b29e23ab48f85db83236a6ebef.jpg)

![](images/9cfb80dd90e30497abfbc48ca7a2a15e4ebe305156c063d2b4e41298bccf2d3e.jpg)
Figure 1: Overview of the proposed AX-MABSA Framework

For word representations, a vocabulary is first created from all the input texts. Then, each word's contextual representation is obtained using a pretrained BERT language model (Devlin et al., 2019). The contextual embeddings of each occurrence of a word are averaged, and this representation is denoted as the static word representation $s_r$:

$$
s_r = \frac{\sum_{R_{i,j} = r} z_{i,j}}{\sum_{R_{i,j} = r} 1} \tag{1}
$$

Here, $z_{i,j}$ is the contextualized representation of $R_{i,j}$, the $j^{th}$ word in the review sentence $R_i$.

For class representation, the representations of the aspect class names are constructed based on the static representations of those words. For example, the category "sports" is represented by the static embedding of the word "sports". An expansion technique is then used to find similar words for each class name from within the input texts, and those words' representations are averaged to obtain the final aspect class embedding.
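The word- and class-representation steps above can be sketched with tiny hand-written vectors standing in for BERT outputs; the words, vectors, and the single-neighbour expansion below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

# Hand-written stand-ins for contextualized vectors z_{i,j}: each word maps to
# the list of vectors for its occurrences in the corpus (illustrative values).
occurrences = {
    "sushi":   [np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.8, 0.2, 0.0, 0.0])],
    "service": [np.array([0.0, 1.0, 0.0, 0.0])],
    "food":    [np.array([0.9, 0.1, 0.0, 0.0])],
}

# Eq. 1: the static representation s_r of a word r is the average of its
# contextualized representations over all occurrences.
static_rep = {w: np.mean(vs, axis=0) for w, vs in occurrences.items()}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Class representation: start from the static embedding of the class name,
# then expand with the most similar vocabulary word and average.
class_name = "food"
nearest = max((w for w in static_rep if w != class_name),
              key=lambda w: cosine(static_rep[w], static_rep[class_name]))
class_rep = (static_rep[class_name] + static_rep[nearest]) / 2
print(nearest)  # 'sushi': closer to 'food' than 'service' is
```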
In class-specific document representation, the representations of the documents (sentences) are guided by the class representations so that the sentences become more aligned with the topics of interest, i.e., the class names. Different attention mechanisms are used over the document representations, guided by the class representations, to get updated document representations. Finally, for document-class alignment, clustering algorithms are used to cluster the $n$ documents into $c$ clusters ($c$ is the number of classes), wherein the seed class centroids are initialized with the class representations.

Clustering Algorithms: We tried different centroid-based clustering algorithms such as K-Means (Lloyd, 1982), Mini-batch K-Means (Sculley, 2010) and Gaussian Mixture Models (GMM) (Duda et al., 1973), and found that, in general, Mini-batch K-Means (mk-means) performs best for the ACD task while GMM performs best for the ACSA task. So, we fix this for our experiments. We used Principal Component Analysis (Abdi and Williams, 2010) for dimensionality reduction of the sentence representation and class representation vectors before clustering. The target dimension is set to 64. We also fixed random_state to 42 for centroid initialization. For mk-means, we used a batch size of 400.

The model requires the surface text of the class names to be present in the dataset a certain number of times. We feel this is a potential drawback in solving our ACSA task, as some surface text of category names may not be present in the dataset or may have no proper meaning representation. For example, the category word "miscellaneous" might have no clear meaning and sometimes might not be present in the dataset. To resolve this issue, we explicitly add the category name to the vocabulary set if it is not found in the dataset. Another drawback of the above approach is that it can only predict one label per sentence.
This is a huge limitation, especially when multiple aspect categories are present in a review sentence. In the following sections, we tackle these issues to propose a robust multi-aspect extraction framework.

# 3.2 AX-SABSA

We observed that the performance of the implemented ACSA module based on X-Class is poor. One reason is that the word representations are based on the pre-trained BERT language model (Devlin et al., 2019), which gives a more general representation of each word; this works well for topic modelling tasks. However, aspect terms are more domain-specific, and thus a general representation does not provide specific information. Therefore, we suggest that unsupervised post-training of BERT on domain-specific data would lead to better word representations and thus better performance.

Unsupervised Post-training of BERT Language Model (UPBERT): We follow a recent model (Gao et al., 2021) which feeds the same input to the encoder twice, with different dropout masks. The model optimizes the following objective function:

$$
z_i = -\log \frac{e^{\operatorname{sim}(a_i^{p_i},\, a_i^{p_i'})/\gamma}}{\sum_{j=1}^{N} e^{\operatorname{sim}(a_i^{p_i},\, a_j^{p_j'})/\gamma}} \tag{2}
$$

Here, $a$ is the hidden state, which is a function of the input sentence and the dropout masks $p$ and $p'$.

We feed our collected domain-specific unlabelled data of varied sizes to this representation and fine-tune the BERT model. For our experiments, we use a batch size of 128, a sequence length of 32, a learning rate of 3e-5, and the Multiple Negatives Ranking Loss of the sentence-transformer model (Reimers and Gurevych, 2019) as the loss function. We vary the dataset size starting from 10k samples for training, and get different fine-tuned BERT models corresponding to different data sizes.
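The objective in Eq. 2 is a contrastive loss over two dropout-noised encodings of the same batch, with the other sentences in the batch acting as in-batch negatives. A minimal numpy sketch, where Gaussian noise stands in for the two dropout masks and all sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def contrastive_loss(a, a_prime, gamma=0.05):
    """Eq. 2 averaged over the batch: positives are the two noised encodings
    of the same sentence (the diagonal); other rows act as negatives."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    a_prime = a_prime / np.linalg.norm(a_prime, axis=1, keepdims=True)
    sim = (a @ a_prime.T) / gamma                # cosine similarity / temperature
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.diagonal(log_prob).mean())

# Two noisy "views" of the same 8 sentence embeddings (dropout stand-in)
base = rng.normal(size=(8, 16))
view1 = base + 0.05 * rng.normal(size=base.shape)
view2 = base + 0.05 * rng.normal(size=base.shape)

loss_matched = contrastive_loss(view1, view2)
loss_shuffled = contrastive_loss(view1, rng.normal(size=base.shape))
print(loss_matched < loss_shuffled)  # matched pairs score far better
```

Minimizing this loss pulls the two views of each sentence together while pushing apart unrelated sentences, which is what yields the improved domain-specific representations.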
Finally, we select the fine-tuned model which provides the best performance (using 80k unlabelled training sentences in our case) and apply this UPBERT model for word representation to solve the ACSA task; we call our single-label predictor model X-SABSA.

Automatic Class-representative Surface Text Selection Algorithm (ACSSA): Our model suffers when the surface text of class names is not present

Algorithm 1 Algorithm for Automatic Class-representative Surface Text Selection

Input: $X$ (noun for ACD, adjective for ACSA), dataset $D$, vocabulary $V$, class names $C$

Output: A list selected[] containing candidate words for each class

Initialize global arrays uV[], targetL[], sourceL[], interL[], goalL[], selected[] and the threshold $T$

![](images/33d53c3198da734d0a02e7f4f62aae33b5a9bc0c15a14b9e19e683360e485a0f.jpg)

in the data. Although we add these words to the vocabulary explicitly, their contextual representations become poor. As an immediate solution, we could manually select candidate words corresponding to class names. However, this would be a difficult and tedious job when the number of categories is high. Also, there can be multiple candidate words for a class name. For example, to represent the category "ambience", one can choose any of the following words: atmosphere, environment, vibes, etc. Similarly, to represent a negative polarity, one can choose any of the following words: bad, problem, pathetic, poor, etc. Depending upon the words chosen, the overall performance varies significantly. So, we propose an algorithm that selects these candidate words automatically given the original class names (see Algorithm 1).

The algorithm ACSSA takes a particular part-of-speech tag (noun for the ACD task, adjective for the ACSA task), the dataset, the vocabulary and the class names as input, and produces a candidate word list as output. Initially, it creates a list $uV$ of words from the vocabulary which have the desired part-of-speech tag. It then finds all the similar words for each class name from the list $uV$. We then select the top-$T$ words from each list; $T$ can be varied upon inspection, and we fixed it to 10 based on experimental results. The similar words for each class are sorted according to their cosine similarity scores. Finally, we sort each list according to the words' number of occurrences in the dataset, and select the topmost occurring word from each list as the aspect class representative. This produces a single candidate word for each class. Thus, the AX-SABSA module uses ACSSA in combination with X-SABSA to automatically generate better aspect category names.

# 3.3 AX-MABSA

Since clustering produces only one label for each review sentence, we propose a Multi-label Generator model based on a dependency parser (Qi et al., 2020) and a similarity-based attention mechanism.

Multi-label Generator Model: This model takes the unlabelled sentences, the sentence representations, the category class representations, and the clustering outputs, and generates multiple categories and associated sentiment polarities for each sentence. We illustrate the model using the following example: "The food was good, but it's not worth the wait or the lousy service". The sentence has tags '(food, positive)' and '(service, negative)'.

Parsing the Input Sentences The unlabelled input sentences are parsed by an off-the-shelf dependency parser (Qi et al., 2020). The parser outputs a pair of dependencies (word, word[head-1]). The output for the above sentence can be seen in Figure 2. For each word with a Noun part-of-speech tag in the sentence, we select those pairs where either the word or word[head-1] is a Noun. We call this final set of pairs 'PPairs'. The 'PPairs' for the above sentence are ('food', 'good'), ('wait', 'worth'), and ('service', 'wait'). Observe that, in general, the first word in a pair is related to aspects while the second word is associated with sentiment.
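The PPairs step can be sketched as follows; the dependency triples are hand-written stand-ins for the parser output on the example sentence (a real pipeline would call the Stanza parser), and the selection rule shown is one reading of the text, chosen to reproduce the example output:

```python
# Stand-in dependency output for the running example as (word, POS, head word)
# triples; hand-written here in place of the real Stanza parse.
deps = [
    ("The", "DET", "food"), ("food", "NOUN", "good"), ("was", "AUX", "good"),
    ("good", "ADJ", "good"), ("not", "PART", "worth"), ("worth", "ADJ", "good"),
    ("the", "DET", "wait"), ("wait", "NOUN", "worth"),
    ("lousy", "ADJ", "service"), ("service", "NOUN", "wait"),
]

# Keep the (word, head) dependency pair for every Noun in the sentence.
ppairs = [(word, head) for word, pos, head in deps if pos == "NOUN"]
print(ppairs)  # [('food', 'good'), ('wait', 'worth'), ('service', 'wait')]
```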
Similarity-based Attention We use a similarity-based attention mechanism to assign a desired class label to each of the words in the PPairs. We first obtain the similarity values between the words in the sentence and all the class names using the cosine similarity: $S_{i,j} = \cos (w_i, c_j)$.

Now, we calculate $\max_c(S)$, which assigns each word to its most similar class. For each aspect word in the 'PPairs', if the corresponding
| Dataset | Rest-14 | Rest-15 | Rest-16 | MAMS |
| --- | --- | --- | --- | --- |
| # of Categories | 5 | 5 | 5 | 8 |
| # of Sentences | 800 | 582 | 586 | 400 |
| Avg # of Aspects/sentence | 1.28 | 1.21 | 1.18 | 2.25 |
| Imbalance | 5.04 | 12.10 | 7.34 | 9.09 |
Table 1: Gold Data Statistics. The Imbalance value signifies the ratio between the largest and smallest category sizes.

$\max_{c}(S)$ is greater than a threshold $^{2}$, then we keep those 'PPairs'. We call these filtered pairs 'FPPairs'. Finally, we assign the (aspect, sentiment) label to each pair in 'FPPairs' based on its corresponding $\max_{c}(S)$ values. If 'FPPairs' contains only one pair or is empty, then we consider the clustering outputs as the predicted (aspect, sentiment) pair.

In the entire setup, we use the UPBERT model mentioned in Section 3.2 for word representation. We refer to the entire model as X-MABSA. When we use the automatic surface word selection algorithm ACSSA in the X-MABSA model, we call the final model AX-MABSA.

# 4 Experimental Setup

We discuss here the datasets we have used, the word representations, and the different baselines we have selected for our experiments.

# 4.1 Datasets

We chose the SemEval-2014 restaurant review (Rest-14) (Pontiki et al., 2014), SemEval-2015 restaurant review (Rest-15) (Pontiki et al., 2015), SemEval-2016 restaurant review (Rest-16) (Pontiki et al., 2016) and the multi-aspect multi-sentiment (MAMS) (Cheng et al., 2017) datasets for sentence-level aspect category and aspect category sentiment. The Rest-14 data has five categories: food, service, ambience, price, and miscellaneous. Rest-15 and Rest-16 have the categories restaurant, ambience, food, service, and drinks. The MAMS dataset has the categories food, ambience, price, service, miscellaneous, staff, menu, and place. The test data sizes for all the datasets are reported in Table 1. Imbalance signifies the ratio between the largest class size and the smallest class size.

Data for BERT Post-training For BERT post-training, we consider the Citysearch data created
# 4.2 Word Representations

We use the pre-trained language model BERT (Devlin et al., 2019) (specifically, the 'bert-base-uncased' model with 110M parameters). BERT is a transformer model (Vaswani et al., 2017) trained to predict masked words from the surrounding context words. We obtain a vector representation for each word of a given sentence from BERT, and we use BERT both for word representations and for the post-training tasks.

# 4.3 Baselines

We compare the performance of the proposed model with diverse types of baselines: random, supervised, and weakly supervised methods.

- Random: A random baseline whose predictions are drawn from a uniform distribution. This provides a lower bound for our evaluation.
- Supervised: A recent supervised method, ACSA-generation (Liu et al., 2021), solves ACSA as a generation task: the training and test sets are structured with predefined templates, and BART (Lewis et al., 2020), a denoising autoencoder, generates the desired outputs. This provides an approximate upper bound for our evaluation.
- Weakly Supervised: JASen (Huang et al., 2020) takes unlabelled training reviews and a few keywords per aspect category and sentiment polarity, and outputs an (aspect, sentiment) pair for each review. The authors only considered sentences with a single aspect category.
- Extremely Weakly Supervised: X-Class (Wang et al., 2021b) takes reviews and a single keyword per class name as inputs and predicts a single class for each review. The method was validated mainly on topic-modelling datasets.
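The core idea shared by such extremely weakly supervised methods is that, given embeddings, a review can be assigned the class whose representative-word embedding is most similar to the review embedding. A minimal cosine-similarity sketch (toy vectors stand in for BERT embeddings; this is illustrative, not any cited system's exact procedure):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

def assign_class(review_vec, class_vecs):
    """Pick the class whose representative-word embedding is closest."""
    return max(class_vecs, key=lambda c: cosine(review_vec, class_vecs[c]))
```

With real models, `review_vec` would be a pooled BERT representation and `class_vecs` would hold the embedding of each class's representative surface word.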
# 5 Experimental Evaluation

In this section, we study the performance of the different algorithms on four datasets, compare them with the baselines, and qualitatively analyse our model's performance.

# 5.1 Evaluation Framework

We evaluate our method in an End-to-End framework. The commonly used ABSA evaluation takes gold aspects as part of the input and predicts the sentiment polarity of each gold aspect. However, since our task is (almost) unsupervised, we cannot assume the aspect categories are known beforehand, as previous work involving sentiment mining alone has also noted. We therefore follow the End-to-End framework, which has two stages. In the first stage, all aspects are predicted from the given sentences. In the second stage, a sentiment polarity is predicted for each aspect obtained in the first stage. In our case, the first-stage output is thus the ACD output, the aspect categories for each sentence, and the second-stage output is the ACSA output, a set of (aspect category, sentiment polarity) tuples for each sentence. A prediction counts as correct only if both the aspect category and the sentiment polarity are predicted correctly, and performance is measured over all (aspect, sentiment) tuples in the gold data.

# 5.2 Evaluation Metrics

We consider two metrics for performance evaluation. For the ACD task, we report the macro-averaged F1 score (F1-macro), the average of the per-class F1 scores. For the ACSA task, we report the macro-averaged F1-PN score (macro F1-PN)
| Supervision Type | Methods | ACD Rest-14 | ACD Rest-15 | ACD Rest-16 | ACD MAMS | ACSA Rest-14 | ACSA Rest-15 | ACSA Rest-16 | ACSA MAMS |
|---|---|---|---|---|---|---|---|---|---|
| Baselines | Random | 22.50 | 21.12 | 19.03 | 16.45 | 08.40 | 08.46 | 07.16 | 05.39 |
| Supervised | ACSA-Generation | 91.41 | 83.56 | 87.11 | 89.23 | 78.43 | 71.91 | 73.76 | 70.30 |
| Weakly Supervised | JASen | 42.27 | 33.29 | 43.43 | 21.57 | 26.62 | 19.44 | 23.23 | 14.74 |
| Extremely Weakly Supervised | X-Class | 46.69 | 40.35 | 36.58 | 36.52 | 34.44 | 25.49 | 24.83 | 16.32 |
| Proposed (Extremely Weakly Supervised) | X-SABSA | 56.16 | 58.87 | 42.77 | 37.72 | 39.66 | 42.55 | 31.46 | 19.60 |
| | AX-SABSA | 69.57 | 56.17 | 45.69 | 39.33 | 44.14 | 40.24 | 32.23 | 18.55 |
| | X-MABSA | 61.73 | 62.07 | 49.02 | 56.48 | 44.96 | 44.35 | 35.81 | 27.28 |
| | AX-MABSA | 74.90 | 60.08 | 50.63 | 60.82 | 49.68 | 42.74 | 36.47 | 29.74 |
Table 2: Comparative results for the ACD and End-to-End ACSA tasks. We report the F1-macro score for ACD and the F1-PN-macro score for ACSA. X-SABSA: proposed single-label predictor model. AX-SABSA: proposed single-label predictor model where the candidate word for each class is also updated. X-MABSA: proposed multi-label predictor model. AX-MABSA: proposed multi-label predictor model where the candidate word for each class is also updated. Clustering algorithms used: mini-batch k-means for ACD and GMM for ACSA.

which is the mean of the F1 scores over all (aspect category, sentiment) tuples with positive or negative polarity. The macro F1-PN is commonly used in SemEval tasks (Pontiki et al., 2016).

# 5.3 Empirical Results

Comparative results for the ACD and ACSA tasks on the different datasets are presented in Table 2. Despite our approach being unsupervised, it performs far better than the random baseline. The improvement of our multi-label models (X-MABSA and AX-MABSA) over the proposed single-label models (X-SABSA and AX-SABSA) and the weakly supervised baselines (X-Class and JASen) is statistically significant at $p < 0.01$ under a paired t-test (Hsu and Lachenbruch, 2014).

For the ACD task, the X-Class baseline obtains F1-macro scores of 46.69, 40.35, 36.58, and 36.52 on the Rest-14, Rest-15, Rest-16, and MAMS datasets, respectively. The proposed X-SABSA model improves performance significantly on all datasets (F1-macro of 56.16, 58.87, 42.77, and 37.72, respectively). Among our proposed models, the multi-label model X-MABSA outperforms the single-label model X-SABSA on all datasets; on the MAMS dataset in particular, it improves performance substantially (F1-macro of 56.48).
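Under one plausible reading of the macro F1-PN metric, a per-class F1 is computed for every (aspect, sentiment) tuple class with positive or negative polarity and then averaged. A sketch of this End-to-End scoring (an illustrative helper, not the official SemEval scorer):

```python
from collections import defaultdict

def f1_pn_macro(gold, pred):
    """gold, pred: one set of (aspect, sentiment) tuples per sentence.

    A tuple is correct only if both aspect and sentiment match. F1 is
    computed per (aspect, sentiment) class and macro-averaged over classes
    whose sentiment is positive or negative (neutral is ignored)."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        for t in p:
            (tp if t in g else fp)[t] += 1   # predicted tuple: hit or miss
        for t in g - p:
            fn[t] += 1                        # gold tuple never predicted
    classes = {c for c in set(tp) | set(fp) | set(fn)
               if c[1] in ("positive", "negative")}
    f1s = []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0
```

The ACD F1-macro works the same way, except the classes are the aspect categories alone and neutral tuples are not excluded.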
We also observe that the AX-MABSA model (i.e., with automatically selected candidate words for class representation) further improves performance on the Rest-14, Rest-16, and MAMS datasets (F1-macro of 74.90, 50.63, and 60.82). This shows that AX-MABSA generalizes better and works well even when class names are not present in the input data.

As the ACSA task is framed as an End-to-End pipeline, we expect performance to be lower than under the commonly used ACSA evaluation procedure. The X-Class baseline achieves F1-PN-macro scores of 34.44, 25.49, 24.83, and 16.32 on Rest-14, Rest-15, Rest-16, and MAMS, respectively. The proposed X-SABSA model improves significantly over this baseline (F1-PN-macro of 39.66, 42.55, and 31.46 on Rest-14, Rest-15, and Rest-16, respectively). The multi-label model X-MABSA improves the results further (F1-PN-macro of 44.96, 44.35, 35.81, and 27.28, respectively), and the AX-MABSA model improves performance on the Rest-14, Rest-16, and MAMS data (F1-PN-macro of 49.68, 36.47, and 29.74, respectively).

Our proposed models perform significantly better than the random baseline and the two weakly supervised baselines (X-Class and JASen) on both the ACD and ACSA tasks. As our method is extremely weakly supervised, we do not expect it to beat the supervised model; nonetheless, in comparison to the supervised model (ACSA-generation), our method shows promising performance. For example, on the Rest-14 data, the supervised model achieves an F1-macro of 91.41 while our proposed model achieves 74.90 for the ACD task. For the ACSA task, the proposed method also performs decently compared to the supervised baseline: on Rest-15, the supervised method achieves an F1-PN-macro of 71.91 while our method achieves 44.35.
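The paired t-test behind the significance claims compares two models' matched per-dataset scores; the test statistic itself is simple to compute (a sketch; in practice `scipy.stats.ttest_rel` would also return the p-value):

```python
from math import sqrt

def paired_t_statistic(scores_a, scores_b):
    """t statistic over paired differences, e.g. two models' per-dataset F1."""
    d = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of d
    return mean / sqrt(var / n)                       # df = n - 1
```

For example, pairing X-MABSA's ACD scores (61.73, 62.07, 49.02, 56.48) with X-Class's (46.69, 40.35, 36.58, 36.52) gives t ≈ 8.05 with 3 degrees of freedom, well above the two-tailed critical value for $p < 0.01$.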
It is evident that our proposed method performs comparatively poorly on the ACSA task for the MAMS data. The reason is the presence of a remarkably high number of 'neutral' labels (43.62% of all polarity labels). Selecting a single representative surface word for the 'neutral' class is difficult, as no word is associated with neutral sentences in the way that, for example, 'bad' represents the 'negative' class and 'good' the 'positive' class; we found no representative word that performs well for the neutral class.

| Review | Actual | Predicted |
|---|---|---|
| The sashimi is always fresh and the rolls are innovative and delicious. | (food, positive) | (food, positive) |
| While there's a decent menu, it shouldn't take ten minutes to get your drinks and 45 for a dessert pizza. | (food, positive), (service, negative) | (food, positive), (service, positive) |
| Who can't decide on a single dish, the tapas menu allowed me to express my true culinary self. | (food, negative), (menu, positive) | (menu, negative) |
| Roof: very nice space (although I know 5 other rooftop bars just as good), but the crowd was bunch of posers and the owner was a tool. | (place, positive), (miscellaneous, neutral) | (place, positive), (ambience, negative) |
| Endless fun, awesome music, great staff! | (service, positive), (ambience, positive), (restaurant, positive) | (service, positive), (ambience, positive) |

Table 3: Illustration of the proposed method on a few examples

# 5.4 Performance Analysis

We report a few example texts with gold and predicted tags in Table 3. We find that in some cases our model merges two closely related categories into one. For example, the text "who can't decide on a single dish, the tapas menu allowed me to express my true culinary self." has food and menu as gold categories, but our model predicts only menu. The reason is that both the words 'dish' and 'menu' obtained a higher similarity score to the category 'menu', which is reasonable.

The fourth sentence in Table 3 has gold labels '(place, positive)' and '(miscellaneous, neutral)', while our model predicts '(place, positive)' and '(ambience, negative)': 'miscellaneous' is misclassified as 'ambience', and 'neutral' as 'negative'. The 'miscellaneous' class is difficult to represent, even when its surface word is replaced by the automatic surface word selection algorithm. Moreover, from the sentence itself, one can plausibly read 'ambience' as a class with 'negative' polarity.

The fifth sentence in Table 3 has gold labels 'service', 'ambience', and 'restaurant', but our model predicts only 'service' and 'ambience', missing the 'restaurant' category. This happens because there is no explicit restaurant-related word in the sentence. In addition, some words relate to both 'ambience' and 'restaurant'; for instance, the word 'place' can belong to either.
# 6 Conclusion

In this paper, we studied extremely weakly supervised aspect category sentiment analysis across four benchmark datasets and presented a state-of-the-art unsupervised framework that requires no labelled data. Our method relies only on the surface text of the aspect class names and on unlabelled texts, extracting aspect-sentiment pairs via a multi-label generator model. We proposed an automatic class-representative surface word selection algorithm to select a proper representative word for each class. We also found that unsupervised post-training of language models on domain-specific data improves word representations and thus performance. Experiments show that our proposed method outperforms all weakly supervised baseline models. In the future, we intend to extend our methods to incorporate more sentiment classes. We believe this work will foster further research interest in unsupervised ABSA.

# 7 Limitations

The main limitation of the proposed work is that it cannot model the 'neutral' sentiment class, and it performs significantly worse when the number of neutral-sentiment reviews in a dataset is high. This is evident from the ACSA results on the MAMS data (Table 2), where the number of neutral labels is high. We also tried several possible neutral-related seed words such as 'okay', 'moderate', and 'average', but performance did not improve, indicating that these words cannot represent the 'neutral' class. Modelling the 'neutral' class effectively would therefore improve model performance. Although our model outperforms the other weakly supervised baselines, there remains ample scope for improvement to bridge the gap to supervised methodologies.

# References

Herve Abdi and Lynne J Williams. 2010. Principal component analysis. Wiley interdisciplinary reviews: computational statistics, 2(4):433-459.
+Hongjie Cai, Yaofeng Tu, Xiangsheng Zhou, Jianfei Yu, and Rui Xia. 2020. Aspect-category based sentiment analysis with hierarchical graph convolutional network. In Proceedings of the 28th International Conference on Computational Linguistics, pages 833-843. +Hongjie Cai, Rui Xia, and Jianfei Yu. 2021. Aspect-category-opinion-sentiment quadruple extraction with implicit aspects and opinions. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 340-350. +Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In Proceedings of the 2017 conference on empirical methods in natural language processing, pages 452-461. +Zhuang Chen and Tieyun Qian. 2019. Transfer capsule network for aspect level sentiment classification. In Proceedings of the 57th annual meeting of the association for computational linguistics, pages 547-556. +Zhuang Chen and Tieyun Qian. 2020a. Enhancing aspect term extraction with soft prototypes. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2107-2117. +Zhuang Chen and Tieyun Qian. 2020b. Relation-aware collaborative learning for unified aspect-based sentiment analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3685-3694. +Jiajun Cheng, Shenglin Zhao, Jiani Zhang, Irwin King, Xin Zhang, and Hui Wang. 2017. Aspect-level sentiment classification with heat (hierarchical attention) network. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 97-106. +Hongliang Dai and Yangqiu Song. 2019. Neural aspect and opinion term extraction with mined rules as weak supervision. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5268-5277. 
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186. + +Richard O Duda, Peter E Hart, and David G Stork. 1973. Pattern classification and scene analysis, volume 3. Wiley New York. +Zhifang Fan, Zhen Wu, Xinyu Dai, Shujian Huang, and Jiajun Chen. 2019. Target-oriented opinion words extraction with target-fused neural sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2509-2518. +Gayatree Ganu, Noemie Elhadad, and Amélie Marian. 2009. Beyond the stars: improving rating predictions using review text content. In WebDB, volume 9, pages 1-6. CiteSeer. +Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894-6910. +Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018. Exploiting document knowledge for aspect-level sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 579-585. +Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2019. An interactive multi-task learning network for end-to-end aspect-based sentiment analysis. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 504-515. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735-1780. +Xiaochen Hou, Peng Qi, Guangtao Wang, Rex Ying, Jing Huang, Xiaodong He, and Bowen Zhou. 2021. 
Graph ensemble learning over multiple dependency trees for aspect-level sentiment classification. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2884-2894. +Henry Hsu and Peter A Lachenbruch. 2014. Paired t test. Wiley StatsRef: statistics reference online. +Jiaxin Huang, Yu Meng, Fang Guo, Heng Ji, and Jiawei Han. 2020. Weakly-supervised aspect-based sentiment analysis via joint aspect-sentiment topic embedding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6989-6999. +Yann LeCun, Yoshua Bengio, et al. 1995. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995. + +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880. +Kun Li, Chengbo Chen, Xiaojun Quan, Qing Ling, and Yan Song. 2020a. Conditional augmentation for aspect term extraction via masked sequence-to-sequence generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7056-7066. +Xin Li, Lidong Bing, Piji Li, Wai Lam, and Zhimou Yang. 2018. Aspect term extraction with history attention and selective transformation. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4194-4200. +Xin Li and Wai Lam. 2017. Deep multi-task learning for aspect term extraction with memory interaction. In Proceedings of the 2017 conference on empirical methods in natural language processing, pages 2886-2892. +Yuncong Li, Cunxiang Yin, Sheng-hua Zhong, and Xu Pan. 2020b. 
Multi-instance multi-label learning networks for aspect-category sentiment analysis. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3550-3560.
+Bin Liang, Hang Su, Rongdi Yin, Lin Gui, Min Yang, Qin Zhao, Xiaoqi Yu, and Ruifeng Xu. 2021. Beta distribution guided aspect-aware graph for aspect category sentiment analysis with affective knowledge. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 208-218.
+Yunlong Liang, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019. A novel aspect-guided deep transition model for aspect based sentiment analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5569-5580.
+Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis lectures on human language technologies, 5(1):1-167.
+Jian Liu, Zhiyang Teng, Leyang Cui, Hanmeng Liu, and Yue Zhang. 2021. Solving aspect category sentiment analysis as a text generation task. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4406-4416.
+Jiangming Liu and Yue Zhang. 2017. Attention modeling for targeted sentiment. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 572-577.
+Stuart Lloyd. 1982. Least squares quantization in PCM. IEEE transactions on information theory, 28(2):129-137.
+Ling Luo, Xiang Ao, Yan Song, Jinyao Li, Xiaopeng Yang, Qing He, and Dong Yu. 2019. Unsupervised neural aspect extraction with sememes. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 5123-5129.
+Dehong Ma, Sujian Li, Fangzhao Wu, Xing Xie, and Houfeng Wang. 2019. Exploring sequence-to-sequence learning in aspect term extraction.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3538-3547. +Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, pages 4068-4074. +Yukun Ma, Haiyun Peng, and Erik Cambria. 2018. Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive LSTM. In Proceedings of the AAAI conference on artificial intelligence, volume 32. +Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud María Jiménez-Zafra, and Gúlşen Eryigit. 2016. SemEval-2016 task 5: Aspect based sentiment analysis. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 19–30, San Diego, California. Association for Computational Linguistics. +Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. SemEval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 486-495, Denver, Colorado. Association for Computational Linguistics. +Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27-35, Dublin, Ireland. Association for Computational Linguistics. +Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human + +languages. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. +Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084. +David Sculley. 2010. Web-scale k-means clustering. In Proceedings of the 19th international conference on World wide web, pages 1177-1178. +Tian Shi, Liuqing Li, Ping Wang, and Chandan K Reddy. 2021. A simple and effective self-supervised contrastive learning framework for aspect detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13815-13824. +Chi Sun, Luyao Huang, and Xipeng Qiu. 2019. Utilizing bert for aspect-based sentiment analysis via constructing auxiliary sentence. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 380-385. +Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2018. Learning to attend via word-aspect associative fusion for aspect-based sentiment analysis. In Proceedings of the AAAI conference on artificial intelligence, volume 32. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. +Kai Wang, Weizhou Shen, Yunyi Yang, Xiaojun Quan, and Rui Wang. 2020. Relational graph attention network for aspect-based sentiment analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3229-3238. +Qianlong Wang, Zhiyuan Wen, Qin Zhao, Min Yang, and Ruifeng Xu. 2021a. Progressive self-training with discriminator for aspect term extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 257-268. +Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. 
Attention-based LSTM for aspect-level sentiment classification. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 606-615. +Zihan Wang, Dheeraj Mekala, and Jingbo Shang. 2021b. X-class: Text classification with extremely weak supervision. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3043-3053. + +Zhen Wu, Chengcan Ying, Fei Zhao, Zhifang Fan, Xinyu Dai, and Rui Xia. 2020a. Grid tagging scheme for aspect-oriented fine-grained opinion extraction. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 2576-2585. +Zhen Wu, Fei Zhao, Xin-Yu Dai, Shujian Huang, and Jiajun Chen. 2020b. Latent opinions transfer network for target-oriented opinion words extraction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9298-9305. +Hu Xu, Bing Liu, Lei Shu, and S Yu Philip. 2018. Double embeddings and cnn-based sequence labeling for aspect extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 592-598. +Lu Xu, Yew Ken Chia, and Lidong Bing. 2021. Learning span-level interactions for aspect sentiment triplet extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4755-4766. +Wei Xue and Tao Li. 2018. Aspect based sentiment analysis with gated convolutional networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2514-2523. +Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, and Zheng Zhang. 2021. A unified generative framework for aspect-based sentiment analysis. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2416-2429. +Wenxuan Zhang, Yang Deng, Xin Li, Yifei Yuan, Li-dong Bing, and Wai Lam. 2021. Aspect sentiment quad prediction as paraphrase generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9209-9219. +He Zhao, Longtao Huang, Rong Zhang, Quan Lu, and Hui Xue. 2020. Spanplt: A span-based multi-task learning framework for pair-wise aspect and opinion terms extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3239-3248. \ No newline at end of file diff --git a/axmabsaaframeworkforextremelyweaklysupervisedmultilabelaspectbasedsentimentanalysis/images.zip b/axmabsaaframeworkforextremelyweaklysupervisedmultilabelaspectbasedsentimentanalysis/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4cf2ef329d3f4c8c2ed6dc1c26bd71de46a18d40 --- /dev/null +++ b/axmabsaaframeworkforextremelyweaklysupervisedmultilabelaspectbasedsentimentanalysis/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e49f29855fac069bad039fc98946f1c1b24b9551324e75c7913fbcaf77602b9 +size 322876 diff --git a/axmabsaaframeworkforextremelyweaklysupervisedmultilabelaspectbasedsentimentanalysis/layout.json b/axmabsaaframeworkforextremelyweaklysupervisedmultilabelaspectbasedsentimentanalysis/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..98f0d188a590d2cdfbd7d243b928b8574f95ccd8 --- /dev/null +++ b/axmabsaaframeworkforextremelyweaklysupervisedmultilabelaspectbasedsentimentanalysis/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a9e74e63364fbeba37bc2ef74058ff325790d9bd4ade1c23c4286a0e22527e4 +size 349684 diff --git 
a/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/236aa7e5-e97e-43a0-bf5f-bdc88dd4fc0b_content_list.json b/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/236aa7e5-e97e-43a0-bf5f-bdc88dd4fc0b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0f22cf4a35e51827cf4fdb57ee9e80d749802772 --- /dev/null +++ b/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/236aa7e5-e97e-43a0-bf5f-bdc88dd4fc0b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47831460ee56ba3263643609e58cdd8ba22f91fb55c8efb0616b72ea48f652c5 +size 42377 diff --git a/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/236aa7e5-e97e-43a0-bf5f-bdc88dd4fc0b_model.json b/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/236aa7e5-e97e-43a0-bf5f-bdc88dd4fc0b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..bdb6d34174c11375393018381a2f4268238ba7fc --- /dev/null +++ b/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/236aa7e5-e97e-43a0-bf5f-bdc88dd4fc0b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24c4da42a863f5b506990386ac192f72645a5603ccd19df2daf7b3e151085756 +size 51866 diff --git a/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/236aa7e5-e97e-43a0-bf5f-bdc88dd4fc0b_origin.pdf b/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/236aa7e5-e97e-43a0-bf5f-bdc88dd4fc0b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..81ee497a7eaf5b04b8f403bcf35d530674e7ca87 --- /dev/null +++ b/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/236aa7e5-e97e-43a0-bf5f-bdc88dd4fc0b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:d2cba79558fca8050381f278868b265135ba53fcd77bcd470a9d07f1b400258c +size 488861 diff --git a/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/full.md b/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/full.md new file mode 100644 index 0000000000000000000000000000000000000000..abbe1b5727428d7e99d86264257c9ec426111f9a --- /dev/null +++ b/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/full.md @@ -0,0 +1,187 @@

# "I Know Who You Are": Character-Based Features for Conversational Humor Recognition in Chinese

Wenbo Shang$^{1,*}$, Jiangjiang Zhao$^{2,*}$, Zezhong Wang$^{3,4}$, Binyang Li$^{1}$, Fangchun Yang$^{2}$, Kam-fai Wong$^{3,4}$

$^{1}$University of International Relations, China

$^{2}$Beijing University of Posts and Telecommunications, China

$^{3}$The Chinese University of Hong Kong, Hong Kong, China

$^{4}$MoE Key Laboratory of High Confidence Software Technologies, China

$^{1}$ {wbshang,byli}@uir.edu.cn $^{2}$ {zjjbupt,fcyang}@bupt.edu.cn $^{3}$ {zzwang,kfwong}@se.cuhk.edu.hk

# Abstract

Humor plays an important role in our daily life, as it is an essential and fascinating element of communication between persons. How to recognize punchlines in dialogue, i.e., conversational humor recognition, has therefore attracted much interest from the computational linguistics community. However, most existing work attempts to understand conversational humor by analyzing the contextual information of the dialogue, neglecting the character of the interlocutor, such as age, gender, and occupation. For instance, the same utterance could be humorous coming from a serious person, but a plain expression coming from a naive person.
To this end, this paper proposes a Character Fusion Conversational Humor Recognition (CFCHR) model that explores character information to recognize conversational humor. CFCHR utilizes a multi-task learning framework that unifies two highly pertinent tasks, i.e., character extraction and punchline identification. Based on deep neural networks, we train both tasks jointly by sharing weights to extract common, task-invariant features, while each task can still learn its task-specific features. Experiments were conducted on a Chinese sitcom corpus consisting of 12,677 utterances from 22 characters. The experimental results demonstrate that CFCHR achieves a $33.08\%$ improvement in F1-score over some strong baselines, proving the effectiveness of character information for identifying punchlines.

# 1 Introduction

Humor recognition is an important task in humor computation: it not only enables machines to recognize humor, but also lays an important foundation for humor generation. According to the form of the humorous text, humor recognition can be
**One-liners humor**

I failed math so many times at school, I can't even count.

**Conversational humor**

| Character | Utterance |
| --- | --- |
| 志国 (Zhiguo), middle-aged, male | 唉,你们不能剩我一人儿啊。 (Oh, you can't leave me alone.) |
| 圆圆 (Yuanyuan), child, female | 不剩您一人,还有小芳大妈。 (You are not alone. Aunt Xiaofang is with you.) |
| 志国 (Zhiguo) | **那还不如剩我一人儿呢 (I'd rather be left alone.)** |
| 傅明 (Fuming), old man, male | 都是你自己惹的麻烦,不剩你剩谁? (It's all your own trouble; who else would be left but you?) |
Table 1: Examples of one-liners humor and conversational humor. The sentence in bold is a punchline, while the rest is set-up.

generally divided into two types: one-liners humor recognition and conversational humor recognition (CHR). As shown in Table 1, one-liners humor focuses on a single sentence or passage, e.g., a joke, while conversational humor arises from dialogue and can be widely applied in a variety of scenarios, including chatbots, machine translation, etc. Recognizing, or even understanding, conversational humor is therefore significant for both academic and industrial fields (Lin et al., 2016).

The objective of this paper is to recognize humor in dialogue. Generally speaking, a dialogue is formed by a series of utterances, which can be divided into set-ups and punchlines. The punchline is the sentence that triggers the laughter, while the remaining utterances are set-ups (Taylor and Mazlack, 2005; Attardo and Raskin, 1991). Conversational humor recognition can thus be considered as identifying whether an utterance is a punchline, such as "I'd rather be left alone" in Table 1.

Different from one-liners humor, the character is one of the most important influencing factors in conversational humor according to the General Theory of Verbal Humor (Attardo and Raskin, 1991). Note that in this paper, "character" refers to the sitcom role. However, existing studies of conversational humor recognition mainly treat it as a punchline recognition task. Most methods do not distinguish conversational humor from one-liners humor and focus on context representations to capture the humorous semantics. By contrast, character information in the dialogue, such as speaking style, gender, and age, is neglected, even though such information captures character-specific traits that are inherently funnier and more likely to make people laugh.
As a result, existing methods perform sub-optimally in conversational humor recognition.

To solve the above issues, this paper explores character information to help machines understand humor in dialogue. We present a Character Fusion Conversational Humor Recognition (CFCHR) model that integrates character information and contextual information to represent conversational humor.

To capture contextual information, an utterance embedding is derived from each utterance; to capture character information, character features are learned from utterances and character attributes are taken from predefined attributes. Then, both the utterance embedding and the character embedding are fed into a multi-task learning framework (Hastie et al., 2009). In this way, CFCHR captures both contextual information and character information for character extraction and punchline recognition.

We conducted experiments on a Chinese sitcom corpus consisting of 12,677 utterances from 22 characters. CFCHR was effective in the character extraction task. Compared with some strong baselines, CFCHR achieved the best punchline recognition performance, i.e., $51.5\%$ F1-score. Moreover, a $33.08\%$ improvement in F1-score was achieved over the baseline model, proving the effectiveness of character information in identifying punchlines.

# 2 Related Work

Humor recognition is usually formulated as a binary classification problem. Most existing work on humor recognition focuses on one-liners, where a classifier is trained on the whole text to predict whether it is humorous or not (Cattle and Ma, 2018; Liu et al., 2018a; Xie et al., 2021; Zhou et al., 2020; Zou and Lu, 2019; Liu et al., 2018b).

In recent years, there has been growing interest in conversational humor recognition.
Early work on conversational humor recognition used LSTM, RNN, and GRU models (Bertero and Fung, 2016; Ramakrishna et al., 2018) to predict punchlines in dialogues. Due to the lack of corpora for conversational humor recognition, some later studies (Pamulapati and Mamidi, 2021; Blinov et al., 2019; Pamulapati et al., 2020) focused on constructing humorous dialogue corpora based on sitcoms. However, none of the above work accounted for character information, which is a crucial source of information for conversational humor recognition. This paper explores character information and integrates multi-faceted character information into the representation of humorous semantics.

# 3 Task Formulation

The input is a dialogue consisting of a sequence of $M$ utterances. Each utterance $s_i$ consists of $N$ words and is denoted by $s_i = \{w_{i,1}, w_{i,2}, \ldots, w_{i,j}, \ldots, w_{i,N}\}$ , where $w_{i,j}$ is the $j^{th}$ word in $s_i$ . Note that each utterance in the dialogue corresponds to an interlocutor with character features denoted as $Role^V$ and character attributes denoted as $Role^D$ ; we detail both in Section 4.1. The objective of this paper is to identify whether an utterance is a punchline by exploring character information.

# 4 CFCHR

In this paper, we propose Character Fusion Conversational Humor Recognition (CFCHR) for punchline recognition in dialogue. CFCHR consists of three sub-components: character extraction, punchline recognition, and a multi-task learning framework. The character extraction module is first designed to obtain the character-information representation; the punchline recognition module then obtains the contextual-information representation; and a multi-task learning framework

![](images/0facd97ceb9331c54c41c56a7799359d95eee8c6f514510f82ea09de08e1f4b1.jpg)
Figure 1: The overall architecture of CFCHR.
CFCHR is designed for two tasks: Task 1, character extraction, identifies which character produced each utterance; Task 2 is punchline recognition. Multi-task learning introduces the character information into the hidden layer while recognizing punchlines.

is finally employed for character extraction and punchline identification.

Given a dialogue of $M$ utterances, each utterance $s_i$ and its corresponding character information $(Role^V, Role^D)$ are fed into the model. For each utterance, we first derive an utterance embedding from the sentence to capture the semantics of the text. To obtain the character features $Role^V$ , CFCHR classifies the interlocutor into a predefined character category based on each utterance. The character information $Role^K$ is then represented by the character features $Role^V$ together with the character attributes $Role^D$ . Next, both the utterance embedding and the character embedding are fed into a multi-task learning framework (Hastie et al., 2009). In this way, CFCHR captures both the contextual semantics and the character information for character extraction and punchline recognition.

# 4.1 Character extraction

In a dialogue, the character of an interlocutor is essential for creating humor, as it contains rich information: character attributes, such as gender, and character features, such as speaking style and personality, which are learned from utterances. Therefore, we represent the character information $\text{Role}^K$ by two parts, character features $\text{Role}^V$ and character attributes $\text{Role}^D$ . Hence, we apply the utterance embedding and human-constructed character attributes to derive character-information representations. In CFCHR, BERT (Devlin et al., 2018) is used to derive utterance embeddings, without loss of generality.
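To make the inputs described above concrete, the following sketch shows one way a dialogue with per-utterance character information (the attributes behind $Role^D$ ) might be represented; all field and variable names are illustrative assumptions, not part of the paper.

```python
from dataclasses import dataclass
from typing import List

# Illustrative schema only; the paper does not define a concrete data format.
@dataclass
class Utterance:
    character: str       # interlocutor of this utterance
    text: str            # the utterance s_i
    is_punchline: bool   # gold label: punchline vs. set-up

# Character attributes (Role^D): hand-constructed, e.g. age and gender.
CHARACTER_ATTRIBUTES = {
    "Zhiguo":   {"age": "middle-aged", "gender": "male"},
    "Yuanyuan": {"age": "child",       "gender": "female"},
    "Fuming":   {"age": "old",         "gender": "male"},
}

# The dialogue from Table 1 (English glosses of the utterances).
dialogue: List[Utterance] = [
    Utterance("Zhiguo",   "Oh, you can't leave me alone.", False),
    Utterance("Yuanyuan", "You are not alone. Aunt Xiaofang is with you.", False),
    Utterance("Zhiguo",   "I'd rather be left alone.", True),
    Utterance("Fuming",   "It's all your own trouble; who else would be left but you?", False),
]

punchlines = [u.text for u in dialogue if u.is_punchline]
```

Each utterance thus carries both its text (for the contextual representation) and a pointer to its speaker's attributes (for the character representation).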
For each utterance $s_i$ , we obtain the corresponding utterance embedding from the pretrained model BERT. BERT deploys a multi-layer bidirectional encoder based on transformers with multi-head self-attention (Vaswani et al., 2017), whose input contains a special token [CLS]; its embedding represents the semantics of the whole utterance. Here, the $d_H$ -dimensional utterance embedding is denoted by $T_{[CLS]}$ .

Based on $T_{[CLS]}$ , we apply a multi-layer perceptron with a single hidden layer to identify characters as follows.

$$
\hat{y}^{E} = \underset{e \in \{0, \dots, 21\}}{\operatorname{argmax}} \mathcal{F}\left(w\, T_{[CLS]} + b\right)_{e} \tag{1}
$$

$\hat{y}^E$ denotes the predicted character class, and $e$ denotes the index of each character class; 22 characters are predefined in total. Since many characters in the sitcom appear only once or twice, such as passers-by and couriers, we group these characters into one category, and the remaining 21 characters are the main characters. Thereby, most character features, like speaking style and personality, can be learned by this single layer and extracted from its weight matrix. Here, we use $Role^{V}$ to denote the character features, with dimension $\mathbb{R}^{d_{C}\times d_{H}}$ .

$$
\operatorname{Role}^{V} = w, \quad \operatorname{Role}^{V} \in \mathbb{R}^{d_{C} \times d_{H}} \tag{2}
$$

$\mathcal{F}(\cdot)$ is the activation function, set to ReLU (Hastie et al., 2009) in this paper. $d_{C}$ is the number of character classes, 22. For each $i \in \{1, \dots, d_{C}\}$ , $Role_{i}^{V}$ represents the $i$-th character category.
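The character classifier of Eq. (1) and the feature matrix of Eq. (2) can be sketched as follows, with untrained, randomly initialized weights standing in for the learned parameters (a toy illustration, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d_H, d_C = 768, 22   # BERT hidden size; number of character classes

# Random stand-ins for the trained single-hidden-layer classifier (Eq. 1).
w = rng.normal(scale=0.02, size=(d_C, d_H))
b = np.zeros(d_C)

def relu(x):
    return np.maximum(x, 0.0)

def predict_character(t_cls):
    """y^E = argmax_e ReLU(w T_[CLS] + b)_e over the 22 character classes."""
    return int(np.argmax(relu(w @ t_cls + b)))

t_cls = rng.normal(size=d_H)        # stand-in for the [CLS] embedding of s_i
e_hat = predict_character(t_cls)

# Eq. 2: the weight matrix itself serves as the character features Role^V.
role_V = w
```

The design point of Eq. (2) is that the classifier's rows double as per-character feature vectors, so no separate character encoder is needed.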
Moreover, we can further derive the joint character information $Role_{i}^{K}$ , indicating both character features and character attributes, by concatenating the two vectors as follows:

$$
\operatorname{Role}_{i}^{K} = \left[\operatorname{Role}_{i}^{V}; \operatorname{Role}_{i}^{D}\right] \tag{3}
$$

$\text{Role}_i^K$ denotes the joint character information of the $i$-th character.

# 4.2 Punchline recognition

For the task of punchline recognition, it is essential to capture the semantics of the utterance, and we adopt a classic representation (Bertero and Fung, 2016). Since character information is one of the most important factors in creating humor, we combine the character-information embedding $\text{Role}_i^K$ and the utterance embedding to capture the overall semantics for punchline recognition as follows.

$$
T_{[ALL]i} = \operatorname{Role}_{i}^{K} \oplus T_{[CLS]i} \tag{4}
$$

We deploy an MLP (Hastie et al., 2009) over the overall semantic representation $T_{[ALL]}$ to recognize punchlines as follows.

$$
\hat{y}^{P} = \underset{p \in \{0, 1\}}{\operatorname{argmax}} \mathcal{F}\left(T_{[ALL]}\right)_{p} \tag{5}
$$

$\hat{y}^P$ represents the predicted label, and $p$ indicates whether the utterance is a punchline.

# 4.3 Multi-task learning

In our work, CFCHR is designed for character extraction and punchline recognition based on a multi-task model structure (Caruana, 1997). We jointly train both tasks by sharing weights to extract common, task-invariant features, while each task can still learn its task-specific features. The loss functions of the two tasks are calculated separately, and their gradients are accumulated within the same iteration.
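The concatenations in Eqs. (3)-(4) and the binary head of Eq. (5) amount to the following, again with random stand-in vectors (the attribute dimension $d_D$ is an assumption; the paper does not state it):

```python
import numpy as np

rng = np.random.default_rng(1)
d_H = 768   # utterance-embedding size
d_D = 8     # attribute-vector size (illustrative; not specified in the paper)

role_V_i = rng.normal(size=d_H)   # learned features of the i-th character
role_D_i = rng.normal(size=d_D)   # hand-constructed attributes (age, gender, ...)
t_cls_i = rng.normal(size=d_H)    # utterance embedding T_[CLS]i

# Eq. 3: Role^K_i = [Role^V_i ; Role^D_i];  Eq. 4: T_[ALL]i = Role^K_i (+) T_[CLS]i.
role_K_i = np.concatenate([role_V_i, role_D_i])
t_all_i = np.concatenate([role_K_i, t_cls_i])

# Eq. 5: a binary head over T_[ALL] decides punchline vs. set-up
# (random weights here; in CFCHR this head is trained jointly with Task 1).
W_p = rng.normal(scale=0.02, size=(2, t_all_i.size))
y_hat_p = int(np.argmax(np.maximum(W_p @ t_all_i, 0.0)))
```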
# 5 Experiment

# 5.1 Experiment Settings

Dataset To evaluate the performance of our proposed method, experiments were conducted on a publicly available dataset collected from a famous Chinese sitcom, 我爱我家 (I Love My Family). In this dataset, the scripts of the sitcom were divided into several dialogues according to scene and plot changes. Each dialogue consisted of several utterances, and each utterance was associated with a character of the sitcom. Each utterance was assigned a label: punchline or non-punchline. The statistics of the dataset are shown in Table 2.

Parameter setting During training, we used the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of $1 \times 10^{-5}$ , and the batch size was set to 32 utterances. Besides, we dynamically adjusted the learning rate via a linear function at each training iteration. For the other parameters, we followed the standard configuration.
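The linear learning-rate adjustment described above can be sketched as follows; the paper fixes only the initial rate of $1 \times 10^{-5}$, so the total step count here is an illustrative assumption:

```python
# Linear decay from the initial rate; total_steps is an illustrative
# assumption (the paper specifies only the initial rate of 1e-5).
init_lr = 1e-5
total_steps = 1000

def lr_at(step: int) -> float:
    # Rate falls linearly to zero over total_steps, clipped at zero after.
    return init_lr * max(0.0, 1.0 - step / total_steps)

lrs = [lr_at(s) for s in range(total_steps + 1)]
```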
| Dataset of *I Love My Family* | |
| --- | --- |
| Dialogues | 348 |
| Utterances | 12,677 |
| Avg. length of dialogues | 36.34 |
| Avg. length of utterances | 9.99 |
| Punchline ratio (%) | 28.76 |
+ +Table 2: The statistics of dataset. + +
| Model | P | R | F1 |
| --- | --- | --- | --- |
| $\mathrm{CFCHR}_{\mathrm{Roberta}}$ | 0.833 | 0.799 | 0.775 |
| $\mathrm{CFCHR}_{\mathrm{BERT}}$ | 0.877 | 0.824 | 0.784 |
Table 3: The results of the character extraction task.

Evaluation Metrics We adopted the classic NLP metrics of precision (P), recall (R), and F1-score (F1) (Ceri et al., 2013; Powers, 2020) to assess performance.

# 5.2 Experimental Results

We compared CFCHR with five baseline models: LSTM+Attention (Lin et al., 2017), Bi-LSTM (Graves and Schmidhuber, 2005), BC-LSTM (Mousa and Schuller, 2017), BERT (Devlin et al., 2018), and Roberta (Liu et al., 2019).

Table 3 shows the performance on the character extraction task. From the results, we can see that CFCHR learns a representation of the character information.

Table 4 shows the performance on the punchline recognition task on the sitcom dataset. After adding character knowledge, the performance of the model improved significantly. Compared to the baseline models, $\mathrm{CFCHR}_{\mathrm{BERT}}$ achieved the best performance, i.e., $51.5\%$ F1-score, a $33.08\%$ improvement over the BERT model. $\mathrm{CFCHR}_{\mathrm{Roberta}}$ achieved $30.23\%$ improvements of
| Model | P | R | F1 |
| --- | --- | --- | --- |
| LSTM+Attention | 0.453 | 0.258 | 0.329 |
| Bi-LSTM | - | - | 0.326 |
| BC-LSTM | - | - | 0.358 |
| Roberta | 0.337 | 0.437 | 0.381 |
| BERT | 0.354 | 0.428 | 0.387 |
| $\mathrm{CFCHR}_{\mathrm{Roberta}}$ | 0.613 | 0.458 | 0.504 |
| $\mathrm{CFCHR}_{\mathrm{BERT}}$ | 0.649 | 0.453 | 0.515 |
Table 4: The results of the punchline recognition task. $\mathrm{CFCHR}_{\mathrm{BERT}}$ denotes CFCHR based on BERT, and $\mathrm{CFCHR}_{\mathrm{Roberta}}$ denotes CFCHR based on Roberta.

F1-score over BERT and a $32.28\%$ improvement in F1-score over Roberta. This proved that character information can improve the model's ability to understand conversational humor. Moreover, the BERT-based models achieved better F1-scores than the Roberta-based ones, whether or not character information was considered.

# 6 Conclusions

In this paper, we explored character information for conversational humor recognition and presented a multi-task learning framework for character extraction and punchline recognition. Experimental results demonstrated that character information is effective for punchline identification, reaching an F1-score of $51.5\%$ . Compared with some strong baselines, our proposed model achieved a $33.08\%$ improvement on the Chinese sitcom dataset.

# 7 Limitations

In our work, we explored character information for punchline recognition and pre-defined 22 characters for a specific sitcom. However, characters' personalities differ across sitcoms, so when transferring to a new sitcom, CFCHR needs to be retrained to learn the new characters' information. In future work, CFCHR could be improved to learn coarse-grained common features of different clusters of characters.

# Acknowledgements

This project was partially supported by the National Natural Science Foundation of China (Grant number: 61976066), Beijing Natural Science Foundation (Grant number: 4212031), the Fundamental Research Fund for the Central Universities (Grant number: 3262021T23), Research Funds for NSD Construction, University of International Relations (Grant number: 2021GA07), and the Innovation & Technology Commission HKSAR (ITF Project No. PRP-054-21FX).

# References

Salvatore Attardo and Victor Raskin. 1991.
Script theory revis(it)ed: Joke similarity and joke representation model.
Dario Bertero and Pascale Fung. 2016. A long short-term memory framework for predicting humor in dialogues. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, USA, June 12-17, 2016, pages 130-135.
Vladislav Blinov, Valeria Bolotova-Baranova, and Pavel Braslavski. 2019. Large dataset and language model fun-tuning for humor recognition. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4027-4032.
Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41-75.
Andrew Cattle and Xiaojuan Ma. 2018. Recognizing humour using word associations and humour anchor extraction. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 1849-1858.
Stefano Ceri, Alessandro Bozzon, Marco Brambilla, Emanuele Della Valle, Piero Fraternali, and Silvia Quarteroni. 2013. An introduction to information retrieval. In Web Information Retrieval, pages 3-11. Springer.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5-6):602-610.
Trevor Hastie, Robert Tibshirani, Jerome H. Friedman, and Jerome H. Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, volume 2. Springer.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Hongfei Lin, Dongyu Zhang, Liang Yang, and Bo Xu. 2016. Computational humor researches and applications. Journal of Shandong University (Natural Science), 51(7):1-10.
Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
Lizhen Liu, Donghai Zhang, and Wei Song. 2018a. Exploiting syntactic structures for humor recognition. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 1875-1883.
Lizhen Liu, Donghai Zhang, and Wei Song. 2018b. Modeling sentiment association in discourse for humor recognition. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 586-591.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Amr El-Desoky Mousa and Björn W. Schuller. 2017. Contextual bidirectional long short-term memory recurrent neural network language models: A generative approach to sentiment analysis. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 1023-1032.
Vaishnavi Pamulapati and Radhika Mamidi. 2021. Developing conversational data and detection of conversational humor in Telugu. In Proceedings of the 2nd Workshop on Computational Approaches to Discourse, pages 12-19.
Vaishnavi Pamulapati, Gayatri Purigilla, and Radhika Mamidi. 2020. A novel annotation schema for conversational humor: Capturing the cultural nuances in Kanyakulkam.
In Proceedings of the 14th Linguistic Annotation Workshop, pages 34-47. +David M. W. Powers. 2020. Evaluation: from precision, recall and f-measure to roc, informedness, markedness and correlation. CoRR, abs/2010.16061. +Anil Ramakrishna, Timothy Greer, David Atkins, and Shrikanth Narayanan. 2018. Computational modeling of conversational humor in psychotherapy. In Interspeech, volume 2018, page 2344. +Julia Taylor and Lawrence Mazlack. 2005. Toward computational recognition of humorous intent. In Proceedings of Cognitive Science Conference, pages 2166-2171. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. +Yubo Xie, Junze Li, and Pearl Pu. 2021. Uncertainty and surprisal jointly deliver the punchline: Exploiting incongruity-based features for humor recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021, pages 33-39. +Yichao Zhou, Jyun-Yu Jiang, Jieyu Zhao, Kai-Wei Chang, and Wei Wang. 2020. "the boating store + +had its best sail ever": Pronunciation-attentive contextualized pun recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 813-822. +Yanyan Zou and Wei Lu. 2019. Joint detection and location of english puns. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2117-2123. 
\ No newline at end of file diff --git a/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/images.zip b/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4da59baadbda75f4c8eecfd709844a573897c264 --- /dev/null +++ b/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee819ae4cbea18c4607c157077e67a8722de3acf70d603a38cd39722d1f4aff7 +size 197361 diff --git a/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/layout.json b/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..9d812effa446fce24d8e85969c26dc6b45c25d9d --- /dev/null +++ b/iknowwhoyouarecharacterbasedfeaturesforconversationalhumorrecognitioninchinese/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5174fbd2e2f7930b071b76a79fb28177b20fbfb40903228484b637261e2f3f2 +size 212364 diff --git a/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/7a8889b6-d965-4482-bc6e-34c090a0fe0b_content_list.json b/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/7a8889b6-d965-4482-bc6e-34c090a0fe0b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ebccb3420b56192d09994aa0ead41e99609dd2dc --- /dev/null +++ b/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/7a8889b6-d965-4482-bc6e-34c090a0fe0b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d62f9ed390eb86b0885f8a5578c1d21de59940487215c034e8a68af7a059e6cb +size 110317 diff --git 
a/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/7a8889b6-d965-4482-bc6e-34c090a0fe0b_model.json b/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/7a8889b6-d965-4482-bc6e-34c090a0fe0b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0babcecd6a99edda019f214918120e54b118d699 --- /dev/null +++ b/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/7a8889b6-d965-4482-bc6e-34c090a0fe0b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:98a576b76e4e240cdb58c2a933ccebda0032e4f3407696a39a58ea433708005c +size 132771 diff --git a/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/7a8889b6-d965-4482-bc6e-34c090a0fe0b_origin.pdf b/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/7a8889b6-d965-4482-bc6e-34c090a0fe0b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b980e2660dd329c9a0f9a9958ae449a44d729e57 --- /dev/null +++ b/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/7a8889b6-d965-4482-bc6e-34c090a0fe0b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:506d0a098b39d7cd7eed2dee4de26cbec6a72652b629004a73af97d7b6718265 +size 1137163 diff --git a/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/full.md b/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1990e8c4d8b726b733a04ce240d5c0557a3f5a61 --- /dev/null +++ b/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/full.md @@ -0,0 +1,400 @@ +# $m^4$ Adapter: Multilingual Multi-Domain Adaptation for Machine Translation with a Meta-Adapter + +Wen Lai $^{1}$ , Alexandra Chronopoulou $^{1,2}$ , Alexander Fraser $^{1,2}$ + +1Center for Information and Language 
Processing, LMU Munich, Germany

$^{2}$ Munich Center for Machine Learning, Germany

{lavine,achron,fraser}@cis.lmu.de

# Abstract

Multilingual neural machine translation (MNMT) models yield state-of-the-art performance when evaluated on data from a domain and language pair seen at training time. However, when an MNMT model is used to translate under domain shift or to a new language pair, performance drops dramatically. We consider a very challenging scenario: adapting the MNMT model both to a new domain and to a new language pair at the same time. In this paper, we propose $m^4$ Adapter (Multilingual Multi-Domain Adaptation for Machine Translation with a Meta-Adapter), which combines domain and language knowledge using meta-learning with adapters. We present results showing that our approach is a parameter-efficient solution which effectively adapts a model to both a new language pair and a new domain, while outperforming other adapter methods. An ablation study also shows that our approach more effectively transfers domain knowledge across different languages and language information across different domains.

# 1 Introduction

Multilingual neural machine translation (MNMT; Johnson et al., 2017; Aharoni et al., 2019; Fan et al., 2021) uses a single model to handle translation between multiple language pairs. There are two reasons why MNMT is appealing: first, it has proved effective at transferring knowledge from high-resource languages to low-resource languages, especially in zero-shot scenarios (Gu et al., 2019; Zhang et al., 2020); second, it significantly reduces training and inference cost, as it requires training only a single multilingual model, instead of a separate model for each language pair.

Adapting MNMT models to multiple domains is still a challenging task, particularly when the domains are distant from the domain of the training corpus. One approach to address this is fine-tuning the model on out-of-domain data for NMT (Freitag and Al-Onaizan, 2016; Dakwale and Monz, 2017). Another approach is to insert lightweight, learnable units between transformer layers, called adapters (Bapna and Firat, 2019), for each new domain. Similarly, there is work on adapting MNMT models to a new language pair using fine-tuning (Neubig and Hu, 2018) and adapters (Bapna and Firat, 2019; Philip et al., 2020; Cooper Stickland et al., 2021b).

Although effective, the above approaches have some limitations: i) Fine-tuning methods require updating the parameters of the whole model for each new domain, which is costly; ii) when fine-tuning on a new domain, catastrophic forgetting (McCloskey and Cohen, 1989) reduces performance on all other domains, a significant issue when data resources are limited; iii) adapter-based approaches require training domain adapters for each domain and language adapters for all languages, which also becomes parameter-inefficient when adapting to a new domain and a new language, because the parameters scale linearly with the number of domains and languages.

In recent work, Cooper Stickland et al. (2021a) compose language adapters and domain adapters in MNMT and explore to what extent domain knowledge can be transferred across languages. They find that it is hard to decouple language knowledge from domain knowledge and that adapters often cause the 'off-target' problem (i.e., translating into a wrong target language (Zhang et al., 2020)) when new domains and new language pairs are combined together. They address this problem by using additional in-domain monolingual data to generate synthetic data (i.e., back-translation; Sennrich et al., 2016) and by randomly dropping some domain adapter layers (AdapterDrop; Rücklé et al., 2021).

Motivated by Cooper Stickland et al.
(2021a), we consider a challenging scenario: adapting an MNMT model to multiple new domains and new language directions simultaneously, in low-resource settings, without using extra monolingual data for back-translation. This scenario could arise when one tries to translate a domain-specific corpus with a commercial translation system. Using our approach, we adapt a model to a new domain and a new language pair using just 500 domain- and language-specific sentences.

To this end, we propose $m^4$ Adapter (Multilingual Multi-Domain Adaptation for Machine Translation with a Meta-Adapter), which facilitates transfer between different domains and languages using meta-learning (Finn et al., 2017) with adapters. Our hypothesis is that we can formulate the task of adapting to new languages and domains as a multi-task learning problem (denoted $D_i - L_1 - L_2$ , which stands for translating from a language $L_1$ to a language $L_2$ in a specific domain $D_i$ ). Our approach has two steps: initially, we perform meta-learning with adapters to efficiently learn parameters in a shared representation space across multiple tasks using a small amount of training data (5000 samples); we refer to this as the meta-training step. Then, we fine-tune the trained model on a new domain and language pair simultaneously using an even smaller dataset (500 samples); we refer to this as the meta-adaptation step.

In this work, we make the following contributions: i) We present $m^4$ Adapter, a meta-learning approach with adapters that can easily adapt to new domains and languages using a single MNMT model. Experimental results show that $m^4$ Adapter outperforms strong baselines.
ii) Through an ablation study, we show that with $m^4$ Adapter, domain knowledge can be transferred across languages and language knowledge can be transferred across domains, without using target-language monolingual data for back-translation (unlike the work of Cooper Stickland et al., 2021a). iii) To the best of our knowledge, this is the first work to explore meta-learning for MNMT adaptation.

# 2 Related Work

Domain Adaptation in NMT. Existing work on domain adaptation for machine translation can be categorized into two types: data-centric and model-centric approaches (Chu and Wang, 2018). The former focus on maximizing the use of in-domain monolingual, synthetic, and parallel data (Domhan and Hieber, 2017; Park et al., 2017; van der Wees et al., 2017), while the latter design specific training objectives, model architectures, or decoding algorithms for domain adaptation (Khayrallah et al., 2017; Gu et al., 2019; Park et al., 2022). In the case of MNMT, adapting to new domains is more challenging because it needs to take into account transfer between languages (Chu and Dabre, 2019; Cooper Stickland et al., 2021a).

Meta-Learning for NMT. Meta-learning (Finn et al., 2017), which aims to learn a generally useful model by training on a distribution of tasks, is highly effective for fast adaptation and has recently been shown to be beneficial for many NLP tasks (Lee et al., 2022). Gu et al. (2018) first introduced a model-agnostic meta-learning algorithm (MAML; Finn et al., 2017) for low-resource machine translation. Sharaf et al. (2020), Zhan et al. (2021) and Lai et al. (2022) formulate domain adaptation for NMT as a meta-learning task and show effective performance when adapting to new domains. Our approach leverages meta-learning to adapt an MNMT model to a new domain and to a new language pair at the same time.

Adapters for NMT.
Bapna and Firat (2019) train language-pair adapters on top of a pre-trained generic MNMT model in order to recover performance lost on high-resource language pairs relative to bilingual NMT models. Philip et al. (2020) train adapters for each language and show that adding them to a trained model improves the performance of zero-shot translation. Chronopoulou et al. (2022) train adapters for each language family and show promising results on multilingual machine translation. Cooper Stickland et al. (2021b) train language-agnostic adapters to efficiently fine-tune a pre-trained model for many language pairs. More recently, Cooper Stickland et al. (2021a) stack language adapters and domain adapters on top of an MNMT model and conclude that it is not possible to transfer domain knowledge across languages except by employing back-translation, which requires significant in-domain resources. In this work, we introduce adapters into the meta-learning algorithm and show that this approach permits transfer between domains and languages.

Our work is most closely related to Cooper Stickland et al. (2021a); however, we note several differences: i) we study a more realistic scenario in which the corpus of each domain and language pair is low-resource (i.e., the meta-training corpus in each domain for each language pair is limited to 5000 sentences and the fine-tuning corpus to 500 sentences), which is easier to obtain; ii) our approach can simultaneously adapt to new domains and new language pairs without using back-translation; iii) we also show, through a detailed ablation analysis, that $m^4$ Adapter can transfer domain information across different languages and language knowledge across different domains.

# 3 Method

Our goal is to efficiently adapt an MNMT model to new domains and languages. We propose a novel approach, $m^4$ Adapter, which formulates the multilingual multi-domain adaptation task as a multi-task learning problem.
To address it, we propose a two-step approach which combines meta-learning and meta-adaptation with adapters. Our approach permits sharing parameters across different tasks. The two steps are explained in Subsections 3.1 and 3.2.

# 3.1 Meta-Training

The goal of meta-learning is to obtain a model that can easily adapt to new tasks. To this end, we meta-train adapters in order to find a good initialization of our model's parameters using a small training dataset of source tasks $\{\mathcal{T}_1,\dots ,\mathcal{T}_t\}$.

We first select $m$ tasks, as we describe in § 3.1.1. Then, for each of the $m$ sampled tasks, we sample $n$ examples. We explain the task sampling strategy in § 3.1.2. This way, we set up an $m$-way-$n$-shot task. After setting up the task, we use a meta-learning algorithm, which we describe in § 3.1.3, to meta-learn the parameters of the adapter layers. The architecture of the adapters and their optimization objective are presented in § 3.1.4. Algorithm 1 details the meta-training process of our approach.

# 3.1.1 Task Definition

Motivated by the work of Tarunesh et al. (2021), where a multilingual multi-task NLP task is regarded as a Task-Language Pair (TLP), we address multilingual multi-domain translation as a multi-task learning problem. Specifically, a translation task in a specific textual domain corresponds to a Domain-Language-Pair (DLP). For example, an English-Serbian translation task in the 'Ubuntu' domain is denoted as the DLP 'Ubuntu-en-sr'. Given $d$ domains and $l$ languages, we have $d \cdot l \cdot (l - 1)$ tasks of this form. We denote the proportion of the $i^{th}$ DLP in the total dataset size of all DLPs as $s_i = |\mathcal{D}_{train}^i| / \left(\sum_{a=1}^n |\mathcal{D}_{train}^a|\right)$, where $s_i$ will be used in temperature-based sampling (see § 3.1.2). The probability of sampling a batch from the $i^{th}$ DLP during meta-training is denoted as $P_{\mathcal{D}}(i)$.
The distribution over all DLPs is a multinomial (which we denote as $\mathcal{M}$) over $P_{\mathcal{D}}(i)$: $\mathcal{M} \sim P_{\mathcal{D}}(i)$.

# 3.1.2 Task Sampling

Given $d$ domains and $l$ languages, we sample some DLPs per batch among all $d \cdot l \cdot (l - 1)$ tasks. We consider a standard $m$-way-$n$-shot meta-learning scenario: assuming access to $d \cdot l \cdot (l - 1)$ DLPs, an $m$-way-$n$-shot task is created by first sampling $m$ DLPs ($m \ll l \cdot (l - 1)$); then, for each of the $m$ sampled DLPs, $(n + q)$ examples are selected. The $n$ examples for each DLP serve as the support set, used to update the parameters of the pre-trained model, while the $q$ examples constitute the query set, used to evaluate the model.

Task sampling is an essential step in meta-learning. Traditional meta-learning methods sample tasks uniformly (Sharaf et al., 2020), through an ordered curriculum (Zhan et al., 2021), or by dynamically adjusting the sampled dataset according to the model parameters (parameterized sampling; Tarunesh et al., 2021). We do not employ these strategies for the following reasons: i) sampling uniformly is simple but ignores the distribution of the unbalanced data; ii) although effective, curriculum-based and parameterized sampling consider features of all $d \cdot l \cdot (l - 1)$ DLPs, a number that grows quickly with the number of languages and domains. In contrast, we follow a temperature-based heuristic sampling strategy (Aharoni et al., 2019), which defines the sampling probability of a dataset as a function of its size. Specifically, given $s_i$ as the share of the $i^{th}$ DLP among all DLPs, we compute the probability of the $i^{th}$ DLP being sampled as:

$$
P _ {\mathcal {D}} (i) = s _ {i} ^ {1 / \tau} / \left(\sum_ {a = 1} ^ {n} s _ {a} ^ {1 / \tau}\right)
$$

where $\tau$ is a temperature parameter.
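As a concrete illustration, the temperature-based sampling above can be sketched in a few lines of pure Python (the function names are ours, not the paper's; sampling is done with replacement for simplicity):

```python
import random

def dlp_sampling_probs(sizes, tau=1.0):
    """Temperature-based sampling distribution over DLPs (Aharoni et al., 2019).

    sizes : dataset sizes |D_train^i|, one per DLP.
    tau   : temperature; tau = 1 samples proportionally to dataset size,
            while a very large tau approaches uniform sampling.
    """
    total = float(sum(sizes))
    shares = [size / total for size in sizes]    # s_i
    unnorm = [s ** (1.0 / tau) for s in shares]  # s_i^(1/tau)
    z = sum(unnorm)
    return [u / z for u in unnorm]               # P_D(i)

def sample_dlp_batch(sizes, m, tau=5.0, seed=0):
    """Draw m DLP indices (with replacement, for simplicity) for one step."""
    rng = random.Random(seed)
    probs = dlp_sampling_probs(sizes, tau)
    return rng.choices(range(len(sizes)), weights=probs, k=m)
```

With `tau=1.0` a DLP with twice the data is sampled twice as often; as `tau` grows, the distribution flattens towards uniform, exactly as described in the text.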
$\tau = 1$ means that each DLP is sampled in proportion to the size of the corresponding dataset. $\tau \to \infty$ refers to sampling DLPs uniformly. + +![](images/26d1d1e4b6d43369bcf753eb23e28d92084ad2ccdd9f1bf78a2b347ae9894aa4.jpg) +Figure 1: $m^4$ Adapter architecture. + +# 3.1.3 Meta-Learning Algorithm + +Given $\theta$ as the parameters of the pre-trained model, $\psi$ as the parameters of the adapters, MAML aims to minimize the following objective: + +$$ +\min _ {\psi} \sum_ {\mathcal {T} _ {i} \sim \mathcal {M}} \mathcal {L} _ {i} \left(U _ {i} ^ {k} (\theta , \psi)\right) +$$ + +where $\mathcal{M}$ is the multinomial distribution over DLPs, $\mathcal{L}_i$ is the loss function and $U_i^k$ is a function which keeps $\theta$ frozen and only returns $\psi$ after $k$ gradient updates calculated on batches sampled from $\mathcal{T}_i$ . Note that, to minimize this goal, the traditional MAML algorithm requires computing gradients of the form $\frac{\partial}{\partial\psi} U_i^k (\psi)$ , which leads to the costly computation of second-order derivatives. To this end, we follow Reptile (Nichol et al., 2018), an alternative first-order meta-learning algorithm that uses a simple update rule: + +$$ +\psi \gets \psi + \beta \frac {1}{| \{\mathcal {T} _ {i} \} |} \sum_ {\mathcal {T} _ {i} \sim \mathcal {M}} (\psi_ {i} ^ {(k)} - \psi) +$$ + +where $\psi_i^{(k)}$ is $U_{i}^{k}(\theta ,\psi)$ and $\beta$ is a hyper-parameter. Despite its simplicity, it was recently shown that Reptile is at least as effective as MAML in terms of performance (Dou et al., 2019). We therefore employ Reptile for meta-learning in our experiments. + +# 3.1.4 Meta-Adapter + +Adapters (Swietojanski and Renals, 2014; Vilar, 2018; Houlsby et al., 2019) are lightweight feedforward modules. They are described by the following Equation: $W_{\mathrm{up}}f \left( W_{\mathrm{down}}\mathrm{LN}(\mathbf{h}) \right) + \mathbf{h}$ . 
An adapter consists of a layer normalization $\mathrm{LN}(\cdot)$ (Ba et al., 2016) of the input $\mathbf{h}$, which is passed to a down-projection $W_{\mathrm{down}} \in R^{z\times d}$, a non-linear activation $f(\cdot)$ (in our case, ReLU) and an up-projection $W_{\mathrm{up}} \in R^{d\times z}$, where $d$ is the bottleneck dimension of the adapter module and the only tunable hyperparameter. The output is combined with a residual connection. Adapters are added between sub-layers of a pre-trained Transformer (Vaswani et al., 2017) model (see the right part of Figure 1), usually after the feed-forward layer.

Using adapters is appealing for multiple reasons: i) we only update the adapter parameters $\psi$ during the whole fine-tuning process, which makes training faster, especially for large pre-trained models; ii) they obtain performance comparable to that of traditional fine-tuning. However, as the adapter parameters $\psi$ are randomly initialized, they may not perform well in the few-shot setting. Moreover, adding a new set of adapters for each domain or language pair (Bapna and Firat, 2019; Cooper Stickland et al., 2021a) quickly becomes inefficient when we need to adapt to many new domains and language pairs. To address this problem, we propose training a Meta-Adapter, which inserts adapter layers into the meta-learning training process (see the left part of Figure 1). In contrast to the traditional adapter training process, we only need to train a single meta-adapter to adapt to all new language pairs and domains.

Let $\theta$ denote the parameters of the pre-trained model and $\psi$ the parameters of the adapter.
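The adapter computation $W_{\mathrm{up}}f(W_{\mathrm{down}}\mathrm{LN}(\mathbf{h})) + \mathbf{h}$ can be made concrete with a minimal pure-Python sketch (helper names are ours; matrices are lists of rows, and biases are omitted for brevity):

```python
import math

def layer_norm(h, eps=1e-6):
    """LN(h): normalize a vector to zero mean and unit variance."""
    mu = sum(h) / len(h)
    var = sum((x - mu) ** 2 for x in h) / len(h)
    return [(x - mu) / math.sqrt(var + eps) for x in h]

def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def adapter_forward(h, W_down, W_up):
    """Bottleneck adapter: W_up * ReLU(W_down * LN(h)) + h."""
    z = matvec(W_down, layer_norm(h))         # down-project to the bottleneck
    z = [max(0.0, v) for v in z]              # non-linearity f (ReLU)
    out = matvec(W_up, z)                     # up-project back to model width
    return [o + hi for o, hi in zip(out, h)]  # residual connection
```

Note that with zero-initialized projections the residual path makes the adapter an identity map, which is why inserting adapters does not perturb the frozen pre-trained model at the start of training.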
Given a target task $\mathcal{T}$ in the domain $\mathcal{D}_{\mathcal{T}}$ and a loss function $\mathcal{L}_{\mathcal{T}}(\cdot)$, we train a meta-adapter to minimize the following objective through gradient descent:

$$
\min _ {\psi} \mathcal {L} _ {\mathcal {T}} (\theta , \psi ; \mathcal {D} _ {\mathcal {T}})
$$

where the parameters of the pre-trained model $\theta$ are frozen and the adapter parameters $\psi$ are randomly initialized; the adapter is much smaller than the model, $|\psi| \ll |\theta|$. This makes our approach more efficient than meta-learning an entire model (see more details in Section 6.1).

Algorithm 1 $m^4$ Adapter (Multilingual Multi-Domain Adaptation with Meta-Adapter)

Input: $\mathcal{D}_{train}$ set of DLPs for meta-training; pre-trained MNMT model $\theta$

1: Initialize $P_{\mathcal{D}}(i)$ based on temperature sampling
2: while not converged do
3: $\triangleright$ Perform Reptile updates
4: Sample $m$ DLPs $\mathcal{T}_1, \mathcal{T}_2, \ldots, \mathcal{T}_m$ from $\mathcal{M}$
5: for $i = 1,2,\dots ,m$ do
6: $\psi_i^{(k)}\gets U_i^k (\theta ,\psi)$, denoting $k$ gradient
7: updates from $\psi$ on batches of DLP $\mathcal{T}_i$
8: while keeping $\theta$ frozen
9: end for
10: $\psi \gets \psi +\frac{\beta}{m}\sum_{i = 1}^{m}(\psi_i^{(k)} - \psi)$
11: end while
12: return Meta-Adapter parameters $\psi$

# 3.2 Meta-Adaptation

After the meta-training phase, the parameters of the adapter are fine-tuned to adapt to new tasks (as both the domain and language pair of interest are not seen during the meta-training stage) using a small amount of data, simulating a low-resource scenario.

We find that this step is essential to our approach, as it permits adapting the parameters of the meta-learned model to the domain and language pair of interest. This step uses a very small amount of data (500 samples), which we believe could realistically be available for each DLP.

# 4 Experiments

Datasets.
We split the datasets into two groups: the meta-training or training dataset (used in step 1, § 3.1) and the meta-adaptation or adapting dataset (used in step 2, § 3.2). We first meta-learn the adapters on the training dataset (which contains DLPs different from the ones we will evaluate on), then fine-tune to new domains and language pairs on the adapting dataset (a small dataset of the DLPs we will evaluate on). We list the datasets used, each treated as a different domain: EUbookshop, KDE, OpenSubtitles, QED, TED, Ubuntu, Bible, UN, Tanzil, Infopankki. The datasets cover the following languages (ISO 639-1 codes): en, de, fr, mk, sr, et, hr, hu, fi, uk, is, lt, ar, es, ru, zh, and are publicly available on OPUS$^4$ (Tiedemann, 2012).

Data Preprocessing. For each training dataset, we strictly limit the corpus of each DLP to a maximum of 5000 sentences to simulate a low-resource setting. For each adapting dataset, we use 500 sentences in each DLP to fine-tune the MNMT model, simulating a few-shot setting. For the validation and test sets, we select 500 sentences and avoid overlap with the adapting dataset by de-duplication. We filter out sentences longer than 175 tokens and preprocess all data using sentencepiece$^{5}$ (Kudo and Richardson, 2018). More details on the data used in this paper can be found in Appendix A.1.

Baselines. We compare $m^4$ Adapter with the following baselines: i) $m2m$: using the original $m2m$ model (Fan et al., 2021) to generate the translations. ii) $m2m + FT$: fine-tuning $m2m$ on all DLPs. iii) $m2m + tag$: fine-tuning $m2m$ with domain tags (Kobus et al., 2017) on all DLPs. iv) agnostic-adapter: mixing the data from all DLPs to train the adapters (Cooper Stickland et al., 2021b), to obtain language- and domain-agnostic adapters. v) stack-adapter: training two adapters for each language pair and domain, then stacking both adapters (Cooper Stickland et al., 2021a).
Taking 'Ubuntu-en-sr' as an example, this approach first trains a language-pair adapter for 'en-sr' using all data containing 'en-sr' in all domains (including the 'Ubuntu' domain) and a domain adapter for 'Ubuntu' using all data covering all language pairs in the 'Ubuntu' domain; the two adapters are then stacked together. vi) meta-learning: traditional meta-learning using the MAML algorithm (Sharaf et al., 2020) on all DLPs.

Implementation. We use $m2m$ as released in the HuggingFace repository$^6$ (Wolf et al., 2020). For adapter training, we use the implementation of the AdapterHub repository$^7$ (Pfeiffer et al., 2020). We use DeepSpeed$^8$ (Rasley et al., 2020) to accelerate the pre-training of big models. Note that all baseline systems except stack-adapter train a single MNMT model or a single adapter on all DLPs in the training datasets and then fine-tune to a specific DLP on a single adapting dataset. For stack-adapter, the number of language-pair adapters and domain adapters to be trained is proportional to the number of language pairs and the number of domains (see more details in Appendix A.2).

Evaluation. We measure case-sensitive detokenized BLEU with SacreBLEU$^9$ (Post, 2018). For
| Method | BLEU (all domains) | TED | Ubuntu | KDE |
| --- | --- | --- | --- | --- |
| m2m | 18.18 | 16.20 | 20.61 | 22.04 |
| m2m + FT | 20.84 | 17.53 | 28.81 | 29.19 |
| m2m + tag | 22.70 | 18.70 | 31.86 | 31.53 |
| agnostic-adapter | 23.70 | 19.82 | 31.07 | 32.74 |
| stack-adapter | 21.06 | 18.34 | 29.17 | 30.26 |
| meta-learning | 20.01 | 17.57 | 28.11 | 28.59 |
| $m^4$Adapter | 23.89 | 19.77 | 31.46 | 32.91 |
Table 1: Performance in the meta-training stage (DLPs of the training dataset): average BLEU over all domains (left) and average BLEU per domain (right, under 'specific domain').

Chinese we use the SacreBLEU tokenizer (tok zh) and convert all traditional characters generated by the model to simplified characters using HanziConv. We also evaluate our models using chrF (Popović, 2015) in light of recent criticism of the BLEU score (Mathur et al., 2020); the results are listed in Appendix A.3.1.

# 5 Results

Our goal is to evaluate the adaptability of $m^4$ Adapter to a variety of new domains and new language pairs simultaneously. In the meta-training stage, we meta-learn the model on 180 DLPs, covering 6 domains (EUbookshop, KDE, OpenSubtitles, QED, TED, Ubuntu) and 30 language pairs (from en, et, mk, sr, hr, hu), comparing our approach to different baseline systems. In the meta-adaptation stage, we fine-tune both our model and the baselines to 3 domains (UN, Tanzil, Infopankki) and 30 language pairs (from ar, en, es, fr, ru, zh) simultaneously. Table 1 shows the results of the meta-training step, and Table 2 presents the main results of our model in the meta-adaptation step compared to the baselines (results for all DLPs are in Appendix A.3.2).

Motivated by Lai et al. (2022), we compare our approach to multiple baselines in terms of domain robustness. As shown in Table 1, $m^4$ Adapter obtains performance that is on par with or better than agnostic-adapter, a robust model. Note that $m^4$ Adapter also outperforms $m2m + tag$, which was shown to be the most robust model by Cooper Stickland et al. (2021a). After showing empirically that we obtain a robust model, we verify its adaptability (see Table 2 and § 6.2.1) and language transfer ability (§ 6.2.2) through a series of experiments.
As shown in Table 2, $m^4$ Adapter performs well when adapting to the meta-adaptation domains and language pairs at the same time. We observe that no baseline system outperforms the original m2m model, which implies that these models are unable to transfer language or domain knowledge from the MNMT model. One possible explanation is that these models already exhibit over-fitting and catastrophic forgetting when trained on the meta-training domains and language pairs in such limited-resource scenarios.

Because of the unpredictability of the baseline systems' performance, it is difficult to draw reliable conclusions from them. For example, in the UN domain, meta-learning is on par with the original m2m model, but its performance on Tanzil and Infopankki is much worse than that of the original m2m model. The agnostic-adapter also performs comparably to the original m2m model in those domains, which shows that it is a robust model; still, it obtains much worse performance on UN. In contrast, $m^4$ Adapter performs more stably when adapting to new domains and language pairs.

In addition, $m^4$ Adapter is able to improve performance on some DLPs for which baseline models obtain extremely low BLEU scores, especially in some distant domains. For example, on Tanzil-ar-ru, the traditional meta-learning method obtains a BLEU score of only 1.70, while $m^4$ Adapter reaches 4.33.

# 6 Analysis

In this section, we conduct additional experiments to better understand the strengths of $m^4$ Adapter. We first investigate the speed benefits of $m^4$ Adapter in training and adaptation (Section 6.1), and then investigate cross-lingual domain transfer and cross-domain language transfer through an ablation study (Section 6.2).

# 6.1 Efficiency of $m^4$ Adapter

We compare the efficiency of the baselines to traditional fine-tuning and list their numbers of trainable parameters and training/adapting times in Table 3.
$m^4$ Adapter only updates the adapter parameters while freezing the MNMT model's parameters (just like agnostic-adapter). It therefore has far fewer trainable parameters than fine-tuning (0.75% of the parameters of the entire model).
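The 0.75% figure can be sanity-checked with a back-of-the-envelope count of bottleneck-adapter parameters. The dimensions below are illustrative assumptions of ours, not values reported in the paper (a hypothetical $d_{model}=1024$, 24 transformer layers, bottleneck 64), but they land near the reported 3.17M:

```python
def adapter_param_count(d_model, bottleneck, n_layers):
    """Trainable parameters when one bottleneck adapter is added per layer:
    down/up projections with biases plus the input LayerNorm (scale, shift)."""
    per_adapter = d_model * bottleneck + bottleneck   # W_down + bias
    per_adapter += bottleneck * d_model + d_model     # W_up + bias
    per_adapter += 2 * d_model                        # LayerNorm gamma, beta
    return n_layers * per_adapter

# Illustrative assumption (not stated in the paper): d_model = 1024 and
# 24 transformer layers for the 418M-parameter m2m model, bottleneck 64.
params = adapter_param_count(d_model=1024, bottleneck=64, n_layers=24)
ratio = params / 418e6  # fraction of the full model that is trainable
```

Under these assumed dimensions the count is roughly 3.2M parameters, i.e. about 0.77% of a 418M-parameter model, consistent with the order of magnitude in Table 3.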
| Method | UN | Tanzil | Infopankki | UN-ar-en | Tanzil-ar-en | Infopankki-ar-en | UN-ar-ru | Tanzil-ar-ru | Infopankki-ar-ru |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| m2m | 32.28 | 8.72 | 17.40 | 38.94 | 6.44 | 22.57 | 22.96 | 3.64 | 15.05 |
| m2m + FT | 29.93 | 8.26 | 15.88 | 35.11 | 6.85 | 21.33 | 19.10 | 3.05 | 14.19 |
| m2m + tag | 29.88 | 8.06 | 15.93 | 34.39 | 6.63 | 20.12 | 19.37 | 2.65 | 13.68 |
| agnostic-adapter | 30.56 | 8.42 | 17.36 | 36.13 | 6.12 | 23.08 | 20.64 | 3.63 | 14.96 |
| stack-adapter | 29.64 | 8.14 | 17.19 | 35.31 | 5.83 | 22.14 | 19.17 | 2.34 | 13.85 |
| meta-learning | 32.21 | 7.02 | 16.73 | 37.13 | 5.50 | 18.91 | 22.68 | 1.70 | 15.23 |
| $m^4$Adapter | 33.53 | 9.87 | 18.43 | 39.05 | 8.56 | 23.21 | 25.22 | 4.33 | 17.48 |
| Δ | +1.25 | +1.15 | +1.03 | +0.11 | +2.12 | +0.64 | +2.26 | +0.69 | +2.43 |
Table 2: Main results of the meta-adaptation stage: average BLEU over all DLPs for each adaptation domain (left) and BLEU scores for selected specific DLPs (right). $\Delta$ denotes the improvement over $m2m$.
| Method | #Param. | Time$_T$ | Time$_A$ |
| --- | --- | --- | --- |
| m2m | 418M (100%) | – | – |
| m2m + FT | 418M (100%) | 100% | 100% |
| m2m + tag | 418M (100%) | 100% | 100% |
| agnostic-adapter | 3.17M (0.75%) | 42% | 150% |
| stack-adapter | $k$·3.17M ($k$·0.75%) | $k$·42% | 200% |
| meta-learning | 418M (100%) | 75% | 500% |
| $m^4$Adapter | 3.17M (0.75%) | 34% | 300% |
Table 3: Number of trainable parameters and training (Time$_T$) / adapting (Time$_A$) time relative to fine-tuning. $k$ denotes the number of DLPs during the training process.

Furthermore, $m^4$ Adapter has significantly fewer parameters than stack-adapter, whose parameter count is $k$ times that of standard adapter-based approaches; this is because a domain adapter and a language-pair adapter must be trained for each DLP when training the stack-adapter model. Adapter-based approaches train faster than fine-tuning, requiring only 34%-42% of its training time, thanks to their parameter efficiency. The adaptation time of $m^4$ Adapter, on the other hand, is often longer, since it requires the additional outer-loop meta-update. Our approach thus needs more adaptation time than traditional adapter methods, but is still faster than updating the entire model with traditional meta-learning. For example, the adaptation time for $m2m + FT$ is 40s, while for $m^4$ Adapter it is 120s, which is still much faster than standard meta-learning (200s).

# 6.2 Ablation Study

We conduct a number of experiments with extensive analysis to validate the domain transfer ability of $m^4$ Adapter across different language pairs (§ 6.2.1), as well as its language transfer ability across multiple domains (§ 6.2.2).

# 6.2.1 Domain Transfer via Languages

To investigate the capacity of our models to transfer domain knowledge across different languages, we define domain transfer via languages, i.e., the ability to transfer across domains while keeping the languages unchanged. We first fine-tune the MNMT model on some of the meta-training domains for a given language pair, and then adapt these trained models to new meta-adaptation domains of the same language pair. More specifically, we first choose 6 languages (en, et, mk, sr, hr, hu), forming 30 language pairs.
Then, out of seven domains (EUbookshop, KDE, OpenSubtitles, QED, TED, Ubuntu, Bible), we choose six across all selected 30 language pairs as the meta-training dataset (180 DLPs) to fine-tune the MNMT model, and use the remaining domain across all selected language pairs (30 DLPs) as the adapting domain, on which we evaluate the adaptability of the fine-tuned MNMT model to the new domain. Table 4 provides the results for domain transfer across languages.

From Table 4, we observe that almost all baseline systems and $m^4$ Adapter outperform the original $m2m$ model (except in the EUbookshop domain), indicating that the model encodes language knowledge and can transfer this knowledge to new meta-adaptation domains. Our approach is comparable to agnostic-adapter, which performs best among all baseline systems.

We also discover that domain transfer via languages is desirable in some distant domains. For example, the original m2m model only obtains BLEU scores of 2.01 and 19.01 in the Bible and OpenSubtitles domains (hr-sr language pair), whereas domain transfer through $m^4$ Adapter yields a considerable performance boost, with BLEU scores of 13.69 and 54.30, respectively.

We notice that none of the baselines outperforms the original m2m model in the EUbookshop domain, which means that the language knowledge learned by the baseline models does not transfer to this particular domain. Our approach, on the other hand, shows strong domain transfer ability. We investigated the reason, which was caused
| Method | EUbookshop | KDE | OpenSubtitles | QED | TED | Ubuntu | Bible | EUbookshop (hr-sr) | KDE (hr-sr) | OpenSubtitles (hr-sr) | QED (hr-sr) | TED (hr-sr) | Ubuntu (hr-sr) | Bible (hr-sr) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| m2m | 17.77 | 22.05 | 14.13 | 18.34 | 16.20 | 20.62 | 9.80 | 11.43 | 25.37 | 19.01 | 12.25 | 8.14 | 22.33 | 2.01 |
| m2m + FT | 12.73 | 24.56 | 16.22 | 20.46 | 18.74 | 31.32 | 11.30 | 9.79 | 21.05 | 53.34 | 23.87 | 20.81 | 34.08 | 12.57 |
| m2m + tag | 13.03 | 25.34 | 16.12 | 17.75 | 17.04 | 26.29 | 11.49 | 10.13 | 29.64 | 49.54 | 19.78 | 20.43 | 34.15 | 13.25 |
| agnostic-adapter | 16.24 | 25.85 | 17.90 | 21.71 | 20.08 | 31.53 | 11.75 | 9.05 | 30.64 | 54.04 | 22.79 | 21.19 | 28.83 | 10.59 |
| stack-adapter | 13.25 | 24.19 | 17.21 | 19.56 | 18.37 | 28.27 | 10.38 | 10.55 | 24.50 | 42.94 | 22.02 | 20.95 | 25.41 | 10.14 |
| meta-learning | 13.61 | 24.91 | 16.22 | 17.70 | 16.40 | 24.93 | 11.84 | 7.90 | 27.85 | 52.50 | 20.41 | 19.00 | 31.24 | 10.42 |
| $m^4$Adapter | 18.99 | 25.22 | 17.94 | 21.71 | 19.86 | 31.37 | 12.12 | 12.05 | 30.49 | 54.30 | 23.92 | 21.32 | 33.71 | 13.69 |
| Δ | +2.75 | -0.63 | +0.04 | +0.00 | -0.22 | -0.16 | +0.37 | +3.00 | -0.15 | +0.26 | +1.13 | +0.13 | +4.88 | +3.10 |
Table 4: Domain transfer via languages: average BLEU over all DLPs in each meta-adaptation domain (left) and BLEU scores for one randomly selected DLP, hr-sr (right). $\Delta$ denotes the improvement over agnostic-adapter.
| Method | de-en | en-fr | fi-uk | is-lt | EUbookshop (de-en) | KDE (de-en) | OpenSubtitles (de-en) | QED (de-en) | TED (de-en) | Ubuntu (de-en) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| m2m | 24.52 | 29.20 | 12.34 | 12.55 | 19.59 | 26.48 | 15.89 | 26.34 | 28.14 | 30.65 |
| m2m + FT | 23.29 | 24.44 | 11.29 | 9.59 | 16.04 | 23.17 | 13.34 | 21.39 | 26.20 | 39.59 |
| m2m + tag | 22.52 | 24.97 | 11.71 | 11.22 | 15.86 | 23.67 | 11.72 | 20.64 | 25.97 | 37.25 |
| agnostic-adapter | 28.33 | 30.93 | 15.42 | 14.38 | 20.16 | 28.72 | 17.97 | 27.66 | 33.63 | 41.89 |
| stack-adapter | 23.37 | 24.96 | 11.51 | 11.09 | 16.14 | 22.51 | 13.84 | 22.29 | 27.67 | 36.73 |
| meta-learning | 25.08 | 28.26 | 13.40 | 12.83 | 17.88 | 21.20 | 16.32 | 24.96 | 30.32 | 39.81 |
| $m^4$Adapter | 28.37 | 30.80 | 15.24 | 14.05 | 20.20 | 28.19 | 18.06 | 27.18 | 33.32 | 43.24 |
| Δ | +0.04 | -0.13 | -0.18 | -0.33 | +0.04 | -0.53 | +0.09 | -0.48 | -0.31 | +1.35 |
Table 5: Language transfer via domains: average BLEU over all DLPs in each meta-adaptation language pair (left) and BLEU scores for one specific DLP, de-en (right). $\Delta$ denotes the improvement over agnostic-adapter.

by a significant overfitting issue when adapting to the EUbookshop domain. The previous fine-tuning strategy converged too early, so the model overfitted significantly to the meta-training dataset and performed exceedingly badly when adapting to the new domain (see the loss curve in Appendix A.5 for more details). This phenomenon is also consistent with our previous finding (§ 5) that our approach is more stable than the baseline systems in adapting to new domains.

# 6.2.2 Language Transfer via Domains

To study the ability of our model to transfer language knowledge across different domains, we define language transfer via domains, i.e., the ability to transfer across languages while keeping the domains unchanged. To this end, we first fine-tune the MNMT model on some meta-training DLPs, and then adapt these trained models to meta-adaptation language pairs in the same domains. Specifically, we first select 180 DLPs as the meta-training dataset to train the model, covering 6 domains (EUbookshop, KDE, OpenSubtitles, QED, TED, Ubuntu) and 30 language pairs (from en, et, mk, sr, hr, hu); we then adapt these trained models to 4 meta-adaptation language pairs (de-en, en-fr, fi-uk, is-lt). The findings for language transfer across domains are shown in Table 5.

According to Table 5, the performance of
This meets our expectation since m2m is trained on a big dataset and learns a great quantity of linguistic information, which limits its capacity to transfer language information in small datasets. This explanation can be demonstrated by the results of the meta-learning approach. As shown in Table 5, meta-learning yields slightly higher BLEU scores compared to the original m2m model, which arguably supports the conclusion that the original m2m model already has strong linguistic information. These small improvements from meta-learning can be attributed to leveraging the limited data available. + +In contrast, adapter-based methods (agnosticadapter and $m^4$ Adapter) permit cross-lingual transfer across domains. $m^4$ Adapter shows a performance that is on par or better than the agnosticadapter, the most competitive model in all baseline systems. The results of the stack-adapter show that it cannot perform language transfer across domains through naively stacking domain adapters and language adapters. This is consistent with the conclusions of Cooper Stickland et al. (2021a). + +Similarly, $m^4$ Adapter has demonstrated significant language transfer ability in distant domains. In + +ubunta-de-en, for example, $m^4$ Adapter achieves a BLEU score of 43.24, which is significantly higher than the original m2m model's BLEU of 30.65. + +# 7 Conclusion + +We present $m^4$ Adapter, a novel multilingual multi-domain NMT adaptation framework which combines meta-learning and parameter-efficient finetuning with adapters. $m^4$ Adapter is effective on adapting to new languages and domains simultaneously in low-resource settings. We find that $m^4$ Adapter also transfers language knowledge across domains and transfers domain information across languages. In addition, $m^4$ Adapter is efficient in training and adaptation, which is practical for online adaptation (Etchegoyhen et al., 2021) to complex scenarios (new languages and new domains) in the real world. 
# 8 Limitations

This work has two main limitations. i) We have only evaluated the proposed method on limited and balanced bilingual training data to simulate the low-resource scenario; however, some domains in our setting are in fact highly imbalanced. ii) We have only evaluated $m^4$ Adapter on machine translation; it would be plausible to extend our method to other NLP tasks, such as text generation or language modeling. Since our framework leverages a multilingual pretrained model and only trains adapters, we believe it could easily be applied to other tasks besides MT.

# Acknowledgement

This work was supported by funding to Wen Lai's PhD research from the LMU-CSC (China Scholarship Council) Scholarship Program. This work has received funding from the European Research Council under the European Union's Horizon 2020 research and innovation program (grant agreement #640550). This work was also supported by the DFG (grant FR 2829/4-1).

# References

Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7747-7763, Online. Association for Computational Linguistics.

Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874-3884, Minneapolis, Minnesota. Association for Computational Linguistics.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.

Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1538-1548, Hong Kong, China. Association for Computational Linguistics.

Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, and He He. 2022. Meta-learning via language model in-context tuning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 719-730, Dublin, Ireland. Association for Computational Linguistics.

Alexandra Chronopoulou, Dario Stojanovski, and Alexander Fraser. 2022. Language-family adapters for multilingual neural machine translation. arXiv preprint arXiv:2209.15236.

Chenhui Chu and Raj Dabre. 2019. Multilingual multi-domain adaptation approaches for neural machine translation. arXiv preprint arXiv:1906.07978.

Chenhui Chu and Rui Wang. 2018. A survey of domain adaptation for neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1304-1319, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Asa Cooper Stickland, Alexandre Berard, and Vassilina Nikoulina. 2021a. Multilingual domain adaptation for NMT: Decoupling language and domain information with adapters. In Proceedings of the Sixth Conference on Machine Translation, pages 578-598, Online. Association for Computational Linguistics.

Asa Cooper Stickland, Xian Li, and Marjan Ghazvininejad. 2021b. Recipes for adapting pre-trained monolingual and multilingual models to machine translation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3440-3453, Online. Association for Computational Linguistics.

Praveen Dakwale and Christof Monz. 2017. Finetuning for neural machine translation with limited degradation across in- and out-of-domain data.
Proceedings of the XVI Machine Translation Summit, 117.

Tobias Domhan and Felix Hieber. 2017. Using target-side monolingual data for neural machine translation through multi-task learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1500-1505, Copenhagen, Denmark. Association for Computational Linguistics.

Zi-Yi Dou, Keyi Yu, and Antonios Anastasopoulos. 2019. Investigating meta-learning algorithms for low-resource natural language understanding tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1192-1197, Hong Kong, China. Association for Computational Linguistics.

Thierry Etchegoyhen, David Ponce, Harritxu Gete, and Victor Ruiz. 2021. Online learning over time in adaptive neural machine translation. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 411-420, Held Online. INCOMA Ltd.

Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond English-centric multilingual machine translation. Journal of Machine Learning Research, 22(107):1-48.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126-1135. PMLR.

Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. arXiv preprint arXiv:1612.06897.

Jiatao Gu, Yong Wang, Yun Chen, Victor O. K. Li, and Kyunghyun Cho. 2018. Meta-learning for low-resource neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3622-3631, Brussels, Belgium. Association for Computational Linguistics.
Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O.K. Li. 2019. Improved zero-shot neural machine translation via ignoring spurious correlations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1258-1268, Florence, Italy. Association for Computational Linguistics.

Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning, pages 2790-2799. PMLR.

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.

Huda Khayrallah, Gaurav Kumar, Kevin Duh, Matt Post, and Philipp Koehn. 2017. Neural lattice search for domain adaptation in machine translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 20-25, Taipei, Taiwan. Asian Federation of Natural Language Processing.

Catherine Kobus, Josep Crego, and Jean Senellart. 2017. Domain control for neural machine translation. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 372-378, Varna, Bulgaria. INCOMA Ltd.

Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.

Wen Lai, Jindřich Libovický, and Alexander Fraser. 2022.
Improving both domain robustness and domain adaptability in machine translation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5191-5204, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.

Hung-yi Lee, Shang-Wen Li, and Ngoc Thang Vu. 2022. Meta learning for natural language processing: A survey. arXiv preprint arXiv:2205.01500.

Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In International Conference on Learning Representations.

Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2020. Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4984-4997, Online. Association for Computational Linguistics.

Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of Learning and Motivation, volume 24, pages 109-165. Elsevier.

Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 875-880, Brussels, Belgium. Association for Computational Linguistics.

Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999.

Cheonbok Park, Hantae Kim, Ioan Calapodescu, Hyun Chang Cho, and Vassilina Nikoulina. 2022. DaLC: Domain adaptation learning curve prediction for neural machine translation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1789-1807, Dublin, Ireland. Association for Computational Linguistics.

Jaehong Park, Jongyoon Song, and Sungroh Yoon. 2017. Building a neural machine translation system using only synthetic parallel data. arXiv preprint arXiv:1704.00253.
Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020. AdapterHub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46-54, Online. Association for Computational Linguistics.

Jerin Philip, Alexandre Berard, Matthias Gallé, and Laurent Besacier. 2020. Monolingual adapters for zero-shot neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4465-4470, Online. Association for Computational Linguistics.

Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.

Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.

Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3505-3506.

Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2021. AdapterDrop: On the efficiency of adapters in transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7930-7946, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics. +Amr Sharaf, Hany Hassan, and Hal Daumé III. 2020. Meta-learning for few-shot NMT adaptation. In Proceedings of the Fourth Workshop on Neural Generation and Translation, pages 43-53, Online. Association for Computational Linguistics. +Pawel Swietojanski and Steve Renals. 2014. Learning hidden unit contributions for unsupervised speaker adaptation of neural network acoustic models. In 2014 IEEE Spoken Language Technology Workshop (SLT), pages 171-176. IEEE. +Ishan Tarunesh, Sushil Khyalia, Vishwajeet Kumar, Ganesh Ramakrishnan, and Preethi Jyothi. 2021. Meta-learning for effective multi-task and multilingual modelling. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3600-3612, Online. Association for Computational Linguistics. +Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2214-2218, Istanbul, Turkey. European Language Resources Association (ELRA). +Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. Dynamic data selection for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1400-1410, Copenhagen, Denmark. Association for Computational Linguistics. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. +David Vilar. 2018. Learning hidden unit contribution for adapting neural machine translation models. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 500-505, New Orleans, Louisiana. Association for Computational Linguistics.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Runzhe Zhan, Xuebo Liu, Derek F Wong, and Lidia S Chao. 2021. Meta-curriculum learning for domain adaptation in neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14310-14318.

Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628-1639, Online. Association for Computational Linguistics.

# A Appendix

# A.1 Datasets

All datasets used in our experiments are publicly available on OPUS. Although OPUS contains corpora from various domains and languages, some recent works (Aharoni and Goldberg, 2020; Lai et al., 2022) have raised concerns about using OPUS corpora, as they can be noisy.
We therefore performed the following cleaning and filtering preprocessing on the original OPUS corpus: i) remove sentences that contain more than $50\%$ punctuation; ii) to ensure that the training set did not contain any corpora from the validation or test sets, all corpora were de-duplicated; iii) sentences longer than 175 tokens were removed; iv) we used a language detection tool (langid) to filter out sentences with mixed languages.

As described in Section 4, during the training phase, although most of the DLPs were limited to a maximum of 5000 sentences, there was still a fraction of DLPs with corpora of fewer than 5000 samples, which we list in Table 6.

# A.2 Model Configuration

Our $m^4$ Adapter model is trained in the following way: it first samples $m$ tasks based on temperature $\tau$, then makes $k$ gradient updates for each task $\mathcal{T}_i$. Finally, it updates the parameters $\psi$. In our experiments, we use the AdamW (Loshchilov and Hutter, 2018) optimizer, which is shared across all DLPs. We fix the initial learning rate to $5e-5$ with a dropout probability of 0.1. We consider values of $m \in \{4, 8, 16\}$, $k \in \{1, 2, 3, 4, 5\}$, $\alpha \in \{0.1, 0.5, 1.0\}$ and $\tau \in \{1, 2, 5, \infty\}$
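The meta-training procedure described in A.2 (sample $m$ tasks by temperature, take $k$ inner gradient steps per task $\mathcal{T}_i$, then update $\psi$) can be sketched as follows. The Reptile-style first-order outer update and the scalar toy losses are illustrative assumptions, not the exact update rule of $m^4$ Adapter:

```python
# Reptile-style sketch of the meta-update over adapter parameters psi:
# sample tasks, run k inner SGD steps per task, then move psi toward the
# mean of the task-adapted parameters. Toy scalar losses (theta - c)^2
# stand in for per-DLP translation losses.

def inner_sgd(theta, target, k, lr):
    """k gradient steps on the toy loss (theta - target)^2."""
    for _ in range(k):
        theta -= lr * 2 * (theta - target)
    return theta

def meta_step(psi, task_targets, k=3, inner_lr=0.1, meta_lr=1.0):
    adapted = [inner_sgd(psi, c, k, inner_lr) for c in task_targets]
    # Reptile outer update: psi <- psi + meta_lr * mean(phi_i - psi)
    return psi + meta_lr * sum(a - psi for a in adapted) / len(adapted)

psi = 0.0
for _ in range(50):
    psi = meta_step(psi, task_targets=[1.0, 2.0, 3.0])
print(round(psi, 3))  # prints 2.0, the mean of the task optima
```

With these toy losses, $\psi$ converges to the point equidistant from all task optima, illustrating how the meta-update seeks an initialization from which every DLP is quickly reachable.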
| DLP | #Num. | DLP | #Num. |
| --- | --- | --- | --- |
| EUbookshop-hu-sr | 59 | Ubuntu-hu-sr | 140 |
| EUbookshop-hu-mk | 976 | Ubuntu-hr-sr | 438 |
| EUbookshop-en-sr | 1104 | Ubuntu-hr-hu | 479 |
| EUbookshop-et-sr | 1141 | Ubuntu-et-sr | 912 |
| EUbookshop-hr-sr | 1280 | Ubuntu-en-sr | 1519 |
| EUbookshop-mk-sr | 1320 | Ubuntu-et-mk | 1545 |
| EUbookshop-hr-hu | 1328 | Ubuntu-hr-mk | 1880 |
| EUbookshop-en-mk | 1836 | Ubuntu-mk-sr | 2091 |
| EUbookshop-et-mk | 2000 | Ubuntu-hu-mk | 2118 |
| EUbookshop-hr-mk | 2003 | Ubuntu-et-hu | 2147 |
| EUbookshop-et-hr | 2861 | Ubuntu-et-hr | 2542 |
| EUbookshop-en-hr | 4668 | Ubuntu-en-mk | 2644 |
| - | - | Ubuntu-en-et | 4998 |
| - | - | Ubuntu-en-hu | 4999 |
Table 6: Data statistics (number of sentences) for DLPs that contain fewer than 5000 sentences.
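The cleaning and filtering steps listed in Appendix A.1 can be sketched as follows. The 50% punctuation and 175-token thresholds come from the text; the function names and the `held_out` parameter are our own, and the language-ID step (iv, langid) is left as a pluggable hook rather than reimplemented:

```python
# Illustrative sketch of the OPUS cleaning pipeline from Appendix A.1.
# Thresholds (50% punctuation, 175 tokens) follow the paper; helper names
# are hypothetical, and langid-based filtering is stubbed via lang_ok.
import string

MAX_TOKENS = 175
MAX_PUNCT_RATIO = 0.5

def punct_ratio(sentence: str) -> float:
    """Fraction of non-whitespace characters that are punctuation."""
    chars = [c for c in sentence if not c.isspace()]
    if not chars:
        return 1.0
    return sum(c in string.punctuation for c in chars) / len(chars)

def clean_corpus(pairs, held_out=(), lang_ok=lambda src, tgt: True):
    """pairs: iterable of (src, tgt); held_out: dev/test pairs to exclude."""
    seen = set(held_out)
    kept = []
    for src, tgt in pairs:
        if (src, tgt) in seen:                                   # ii) de-duplication
            continue
        if max(punct_ratio(src), punct_ratio(tgt)) > MAX_PUNCT_RATIO:  # i)
            continue
        if max(len(src.split()), len(tgt.split())) > MAX_TOKENS:  # iii) length cut-off
            continue
        if not lang_ok(src, tgt):                                # iv) e.g. langid.classify
            continue
        seen.add((src, tgt))
        kept.append((src, tgt))
    return kept
```

In practice `lang_ok` would wrap a language identifier and reject pairs whose detected language differs from the expected side of the DLP.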
| | UN | Tanzil | Infopankki |
| --- | --- | --- | --- |
| τ = 1 | 33.53 | 9.87 | 18.43 |
| τ = 2 | 33.52 | 9.81 | 18.46 |
| τ = 5 | 33.33 | 9.77 | 18.19 |
| τ = ∞ | 33.44 | 9.80 | 18.44 |
Table 7: Different temperature settings.

and choose the best setting ($m = 8$, $k = 3$, $\alpha = 1.0$, $\tau = 1$) based on the average BLEU scores over all DLPs. Each $m^4$ Adapter model is trained for 3 epochs and adapts to each DLP for 1 epoch to simulate a fast adaptation scenario.

# A.3 Additional Results

# A.3.1 chrF Evaluation

In addition to BLEU, we also use chrF (Popović, 2015) as an evaluation metric. Tables 9, 10 and 11 show the results. $m^4$ Adapter is more effective than all baseline systems in terms of chrF, which is consistent with the BLEU scores (presented in Tables 2, 4 and 5).

# A.3.2 Results on all DLPs

Figure 2 reports the results for all DLPs, which is consistent with the results in Tables 2 and 9.

# A.4 Analysis

To better understand our proposed method, we investigate the effect of different parameter settings on the results (as described in Section 3.1.2). We also analyse the poor results on the EUbookshop domain, as described in Section 6.2.1.
| shots | avg BLEU |
| --- | --- |
| 2-shots | 23.80 |
| 4-shots | 23.88 |
| 8-shots | 23.89 |
| 16-shots | 23.85 |
| 32-shots | 23.88 |
Table 8: Different numbers of shots.

# A.4.1 Effect of temperature sampling

Although the meta-training data of all DLPs is limited to a maximum of 5000 sentences, some DLPs still have fewer than 5000 sentences, so we apply temperature-based task sampling with $\tau = 1, 2, 5$ and $\infty$ and show the results in Table 7. We notice that the performance of the various temperature settings is very similar. These results meet our expectations: since the data was limited to a maximum of 5000 sentences in most DLPs, with the exception of some DLPs in the EUbookshop and Ubuntu domains (see Appendix A.1), data is sampled close to uniformly under the different temperature settings.

# A.4.2 Effect of different shots

We also test the performance with different numbers of shots ($n = 2, 4, 8, 16, 32$) and show the results in Table 8. Interestingly, we observe that $m^4$ Adapter is not sensitive to the number of shots, unlike other NLP (Chen et al., 2022) and computer vision tasks (Finn et al., 2017) that use a meta-learning approach. We argue that this is because the meta-adapter is randomly initialized at each batch, resulting in a gap between training and inference. Narrowing this gap is an important direction for future research.

# A.5 Analysis on EUbookshop domain

As described in Section 6.2.1, we observed that all baseline systems overfit when trained on data from the EUbookshop domain. For example, in the case of the $m2m + FT$ baseline, the training loss converges and stops improving at a very early stage. After that, the model overfits the validation set (Figure 2). In contrast, the training loss of $m^4$ Adapter does not show signs of overfitting. This is probably due to the much smaller number of parameters that our proposed model trains.
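The temperature-based task sampling analysed in A.4.1 can be sketched as follows; the proportional form $p_i \propto (n_i/N)^{1/\tau}$ is the standard recipe and an assumption on our part, not quoted from the paper:

```python
# Temperature-based sampling over DLPs: p_i is proportional to
# (n_i / N) ** (1 / tau). tau = 1 reproduces the data distribution;
# tau -> infinity is uniform, which is why the settings barely differ
# once most corpora are capped at 5000 sentences.
import math

def sampling_probs(sizes, tau):
    n_total = sum(sizes)
    if math.isinf(tau):
        return [1.0 / len(sizes)] * len(sizes)
    weights = [(n / n_total) ** (1.0 / tau) for n in sizes]
    z = sum(weights)
    return [w / z for w in weights]

# Two capped DLPs plus the smallest DLP, Ubuntu-hu-sr (140; see Table 6).
sizes = [5000, 5000, 140]
for tau in (1, 2, 5, math.inf):
    print(tau, [round(p, 3) for p in sampling_probs(sizes, tau)])
```

Raising $\tau$ boosts the sampling probability of the smallest DLP toward the uniform share, which is the flattening effect discussed above.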
![](images/36351a79fa49d90de3aecdaaca00c18ffb800a2c87b698260cfb7fdafc02d49f.jpg)
(a) m2m + FT

![](images/04aff9477966953720a4b81c382c2785a5886befa692a9cf5483c3249ddaf4b2.jpg)
(b) $m^4$ Adapter
Figure 2: Training loss of m2m + FT and $m^4$ Adapter in the EUbookshop domain.
[Panels (a) m2m, (b) m2m + tag, (c) m2m + FT, (d) agnostic-adapter and (e) stack-adapter report per-DLP BLEU scores for the UN, Tanzil and Infopankki domains; the score grids and their language-pair column labels are not recoverable from the extraction.]

![](images/746ca23fadffd70e420f85daa4047438fa752d3022ce971f27faef6dc13bd89e.jpg)
(f) Meta-Learning

![](images/c37cf4481621197530450b4dd77c000c85a72a6089441c674b0a96e792f0f15d.jpg)
(g) $m^4$ Adapter
Figure 2: Main result: BLEU scores in all DLPs
| Model | UN | Tanzil | Infopankki | UN-ar-en | Tanzil-ar-en | Infopankki-ar-en | UN-ar-ru | Tanzil-ar-ru | Infopankki-ar-ru |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| m2m | 0.480 | 0.227 | 0.377 | 0.602 | 0.280 | 0.479 | 0.484 | 0.191 | 0.450 |
| m2m + FT | 0.473 | 0.203 | 0.348 | 0.592 | 0.249 | 0.466 | 0.473 | 0.154 | 0.401 |
| m2m + tag | 0.473 | 0.203 | 0.344 | 0.590 | 0.255 | 0.448 | 0.474 | 0.152 | 0.400 |
| agnostic-adapter | 0.475 | 0.228 | 0.370 | 0.615 | 0.242 | 0.488 | 0.486 | 0.217 | 0.431 |
| stack-adapter | 0.472 | 0.207 | 0.368 | 0.593 | 0.243 | 0.476 | 0.473 | 0.151 | 0.405 |
| meta-learning | 0.487 | 0.203 | 0.349 | 0.612 | 0.278 | 0.454 | 0.483 | 0.165 | 0.428 |
| m4Adapter | 0.525 | 0.230 | 0.384 | 0.649 | 0.299 | 0.491 | 0.536 | 0.228 | 0.521 |
Table 9: Main results of the meta-adaptation stage: average chrF scores over all DLPs for each adaptation domain (left) and chrF scores on selected specific DLPs (right).
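The chrF metric reported in Tables 9, 10 and 11 is a character n-gram F-score (Popović, 2015). A simplified sketch: uniform averaging over n-gram orders and whitespace removal are our simplifications, and the official chrF implementation (e.g. in sacreBLEU) differs in details:

```python
# Minimal character n-gram F-score in the spirit of chrF: precision and
# recall over character n-grams (n = 1..6), combined with beta = 2 so
# recall is weighted twice as much as precision.
from collections import Counter

def ngrams(text, n):
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    f_scores = []
    for n in range(1, max_n + 1):
        hyp, ref = ngrams(hypothesis, n), ngrams(reference, n)
        if not hyp or not ref:
            continue  # string too short for this n-gram order
        overlap = sum((hyp & ref).values())
        p = overlap / sum(hyp.values())
        r = overlap / sum(ref.values())
        if p + r == 0:
            f_scores.append(0.0)
            continue
        f_scores.append((1 + beta**2) * p * r / (beta**2 * p + r))
    return sum(f_scores) / len(f_scores) if f_scores else 0.0
```

Because it matches character rather than word n-grams, chrF rewards partially correct morphology, which matters for the morphologically rich languages in these DLPs.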
Average over all DLPs per meta-adaptation domain:

| Model | EUbookshop | KDE | OpenSubtitles | QED | TED | Ubuntu | Bible |
| --- | --- | --- | --- | --- | --- | --- | --- |
| m2m | 0.446 | 0.417 | 0.339 | 0.420 | 0.408 | 0.476 | 0.129 |
| m2m + FT | 0.378 | 0.444 | 0.358 | 0.444 | 0.445 | 0.567 | 0.144 |
| m2m + tag | 0.388 | 0.445 | 0.359 | 0.414 | 0.428 | 0.520 | 0.135 |
| agnostic-adapter | 0.419 | 0.460 | 0.385 | 0.456 | 0.461 | 0.568 | 0.144 |
| stack-adapter | 0.382 | 0.436 | 0.390 | 0.438 | 0.441 | 0.546 | 0.134 |
| meta-learning | 0.387 | 0.440 | 0.360 | 0.412 | 0.422 | 0.509 | 0.142 |
| m4Adapter | 0.497 | 0.452 | 0.386 | 0.456 | 0.457 | 0.565 | 0.148 |

Specific DLP (hr-sr):

| Model | EUbookshop | KDE | OpenSubtitles | QED | TED | Ubuntu | Bible |
| --- | --- | --- | --- | --- | --- | --- | --- |
| m2m | 0.361 | 0.432 | 0.284 | 0.204 | 0.146 | 0.495 | 0.025 |
| m2m + FT | 0.353 | 0.473 | 0.677 | 0.423 | 0.429 | 0.563 | 0.138 |
| m2m + tag | 0.359 | 0.502 | 0.671 | 0.360 | 0.428 | 0.581 | 0.118 |
| agnostic-adapter | 0.279 | 0.507 | 0.613 | 0.388 | 0.415 | 0.554 | 0.127 |
| stack-adapter | 0.358 | 0.427 | 0.562 | 0.381 | 0.427 | 0.526 | 0.124 |
| meta-learning | 0.237 | 0.502 | 0.676 | 0.353 | 0.404 | 0.546 | 0.139 |
| m4Adapter | 0.369 | 0.504 | 0.679 | 0.427 | 0.431 | 0.578 | 0.143 |
Table 10: Domain transfer via languages: average chrF scores over all DLPs in each meta-adaptation domain (left) and chrF scores on one randomly selected DLP, hr-sr (right).
Average over all DLPs per meta-adaptation language pair:

| Model | de-en | en-fr | fi-uk | is-lt |
| --- | --- | --- | --- | --- |
| m2m | 0.116 | 0.130 | 0.327 | 0.320 |
| m2m + FT | 0.112 | 0.094 | 0.253 | 0.243 |
| m2m + tag | 0.094 | 0.096 | 0.258 | 0.261 |
| agnostic-adapter | 0.116 | 0.127 | 0.343 | 0.331 |
| stack-adapter | 0.113 | 0.096 | 0.256 | 0.258 |
| meta-learning | 0.115 | 0.125 | 0.317 | 0.309 |
| m4Adapter | 0.117 | 0.131 | 0.342 | 0.333 |

Specific DLP (de-en):

| Model | EUbookshop | KDE | OpenSubtitles | QED | TED | Ubuntu |
| --- | --- | --- | --- | --- | --- | --- |
| m2m | 0.171 | 0.104 | 0.093 | 0.107 | 0.132 | 0.095 |
| m2m + FT | 0.164 | 0.091 | 0.089 | 0.105 | 0.134 | 0.090 |
| m2m + tag | 0.140 | 0.067 | 0.082 | 0.088 | 0.116 | 0.077 |
| agnostic-adapter | 0.168 | 0.102 | 0.093 | 0.108 | 0.134 | 0.092 |
| stack-adapter | 0.164 | 0.087 | 0.088 | 0.105 | 0.130 | 0.075 |
| meta-learning | 0.170 | 0.101 | 0.092 | 0.108 | 0.133 | 0.092 |
| m4Adapter | 0.174 | 0.107 | 0.095 | 0.108 | 0.134 | 0.097 |
+ +Table 11: Language transfer via domains: average chrF scores on all DLPs in each meta-adaptation language pair (left) and chrF scores on one specific DLP in de-en (right). \ No newline at end of file diff --git a/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/images.zip b/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c891afc97213ea6d0f6b84f25f1adcf9742ed765 --- /dev/null +++ b/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f782f068c7a6bd2bfd982dc867090ffb03ed7a767186f628fbd7a423902ed832 +size 845707 diff --git a/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/layout.json b/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b4192b11546eea318c5bdec329a3596ac3804ed4 --- /dev/null +++ b/m4adaptermultilingualmultidomainadaptationformachinetranslationwithametaadapter/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:daf69d8e6451dc11635d2680fe9dc5aff2c0e11bd606832a9277a2e4ceccc21d +size 562878 diff --git a/xdocunifiedpretrainingforcrossformatdocumentunderstanding/5b3cf8ea-26c9-41ac-9870-ebf099674f6a_content_list.json b/xdocunifiedpretrainingforcrossformatdocumentunderstanding/5b3cf8ea-26c9-41ac-9870-ebf099674f6a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..05ed2d8cd36a2cf45b1e9099d03150ba3779b930 --- /dev/null +++ b/xdocunifiedpretrainingforcrossformatdocumentunderstanding/5b3cf8ea-26c9-41ac-9870-ebf099674f6a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a59b7e203bd5be89bd805696b28bb162fd2a3c618818be76ecb9247ff471638 +size 75868 diff --git 
a/xdocunifiedpretrainingforcrossformatdocumentunderstanding/5b3cf8ea-26c9-41ac-9870-ebf099674f6a_model.json b/xdocunifiedpretrainingforcrossformatdocumentunderstanding/5b3cf8ea-26c9-41ac-9870-ebf099674f6a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..f4d6c904275dbbe0dc6f4462ded794071ab4d0fb --- /dev/null +++ b/xdocunifiedpretrainingforcrossformatdocumentunderstanding/5b3cf8ea-26c9-41ac-9870-ebf099674f6a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69ea2bafa29dc69c40bf9bae288269203cf61ad86e94a3bcf14d2dac7e73b6c5 +size 93813 diff --git a/xdocunifiedpretrainingforcrossformatdocumentunderstanding/5b3cf8ea-26c9-41ac-9870-ebf099674f6a_origin.pdf b/xdocunifiedpretrainingforcrossformatdocumentunderstanding/5b3cf8ea-26c9-41ac-9870-ebf099674f6a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fd6deeb0ac4a2548ebe7a2f4fa21e6ac4624fb01 --- /dev/null +++ b/xdocunifiedpretrainingforcrossformatdocumentunderstanding/5b3cf8ea-26c9-41ac-9870-ebf099674f6a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb49a7bfe81de9afb24bb263eae823445c66aa36f89efb980dd1c096a32fa060 +size 926649 diff --git a/xdocunifiedpretrainingforcrossformatdocumentunderstanding/full.md b/xdocunifiedpretrainingforcrossformatdocumentunderstanding/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9fcb540ccc4a7bd1bbef4d6944a7770183f05e73 --- /dev/null +++ b/xdocunifiedpretrainingforcrossformatdocumentunderstanding/full.md @@ -0,0 +1,301 @@ +# XDoc: Unified Pre-training for Cross-Format Document Understanding + +# Jingye Chen*, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei +Microsoft Corporation + +{v-jingyechen,tengchaolv,lecu,chazhang,fuwei}@microsoft.com + +# Abstract + +The surge of pre-training has witnessed the rapid development of document understanding recently. 
The pre-training and fine-tuning framework has been used effectively to tackle texts in various formats, including plain texts, document texts, and web texts. Despite achieving promising performance, existing pre-trained models usually target one specific document format at a time, making it difficult to combine knowledge from multiple document formats. To address this, we propose XDoc, a unified pre-trained model which deals with different document formats in a single model. For parameter efficiency, we share backbone parameters for different formats such as the word embedding layer and the Transformer layers. Meanwhile, we introduce adaptive layers with lightweight parameters to enhance the distinction across different formats. Experimental results have demonstrated that with only $36.7\%$ of the parameters, XDoc achieves comparable or even better performance on a variety of downstream tasks compared with the individual pre-trained models, which is cost-effective for real-world deployment. The code and pre-trained models are publicly available at https://aka.ms/xdoc.

# 1 Introduction

Document understanding has undoubtedly been an important research topic, as documents play an essential role in message delivery in our daily lives (Cui et al., 2021). During the past several years, the flourishing of deep learning has witnessed the rapid development of document understanding in various formats, ranging from plain texts (Devlin et al., 2018; Liu et al., 2019; Dong et al., 2019) and document texts (Xu et al., 2020, 2021a; Huang et al., 2022) to web texts (Chen et al., 2021; Li et al., 2022a; Wang et al., 2022b). Recently, pre-training techniques have been the de facto standard

![](images/1ab730f6f36f2142f2edca733240ddb72a694d4e96d26be769ffef00b43601f2.jpg)
Figure 1: Pre-trained models for different document formats.
Most of the structures are similar (word embedding, 1D position embedding, and Transformer layers), while only a small proportion of the structures (2D position and XPath embeddings) are different.

for document understanding, where the model is first pre-trained in a self-supervised manner (e.g. using masked language modeling as the pretext task (Devlin et al., 2018)) on a large-scale corpus, then fine-tuned on a series of downstream tasks like question answering (Rajpurkar et al., 2016; Mathew et al., 2021), key information extraction (Jaume et al., 2019; Xu et al., 2022) and many others. Albeit achieving impressive performance on specific tasks, existing pre-trained models are far from flexible, as they can only tackle texts in a single format (e.g. LayoutLM (Xu et al., 2020) is designed for document texts and is not suitable for web texts). This makes it difficult to combine knowledge from multiple document formats. Meanwhile, the number of pre-trained models will keep increasing if more formats (e.g. Word and PowerPoint) are further studied in academia.

![](images/521fb15e6e630234f204f16facd0f4bdbc83ea62d0a23909692550ee87cd7ca9.jpg)
(a) An illustration of plain text

![](images/887ff4684dfb7952312ac98b1ac1e6daa169a01f4056231e6d9b6d22c56c72fb.jpg)
(b) An illustration of document text

![](images/e5b5976cdf0693ed009f441bf05a424df9eb50def52c21dc6c305e02988c2813.jpg)
(c) An illustration of web text

Figure 2: Illustrations of three document formats. For each format, the corresponding meta-information is shown in the dashed boxes. Please note that the text content and 1D position are common attributes across the three formats, while 2D position and XPath strings (marked in red) are specific to document and web texts respectively.

Among the different pre-trained models for document understanding, it is observed that many pre-trained models share a similar architecture, such as a word embedding layer, a 1D position embedding layer, and Transformer layers (see Figure 1). In contrast, there are also different parts serving as prior knowledge for a specific format (e.g. two-dimensional coordinates for document texts and XPaths for web texts). Intuitively, we find that the parameters of these different parts are far fewer than the parameters of the shared backbones. For instance, $\text{LayoutLM}_{\text{BASE}}$ (Xu et al., 2020), based on RoBERTa (Liu et al., 2019), consists of 131M parameters, while the 2D position embedding layer only contains 3M parameters (2.3%). Similarly, $\text{MarkupLM}_{\text{BASE}}$ (Li et al., 2022a), based on RoBERTa, has 138M parameters, while the XPath embedding layer only contains 11M parameters (8.0%). Therefore, it is indispensable to design a unified pre-trained model for various text formats while sharing backbone parameters to make models more compact.

To this end, we propose XDoc, a unified architecture with multiple input heads designed for various categories of documents. For the sake of parameter efficiency, we share the backbone network architecture across different formats, including the word embedding layer, the 1D position embedding layer, and dense Transformer layers. Considering that the format-specific parts only take up a small proportion of XDoc, we introduce adaptive layers to make the representation learning for different formats more robust. We collect large-scale training samples for the different document formats and leverage masked language modeling to pre-train XDoc. Specifically, we use three widely-used document formats for experiments, including plain, document, and web texts (see Figure 2 for more details).
To verify the model accuracy, we select the GLUE benchmark (Wang et al., 2019) and SQuAD (Rajpurkar et al., 2016, 2018) to evaluate plain text understanding, FUNSD (Jaume et al., 2019) and DocVQA (Mathew et al., 2021) to evaluate document understanding, and WebSRC (Chen et al., 2021) for web text understanding. Experimental results have demonstrated that XDoc achieves comparable or even better performance on these tasks while maintaining parameter efficiency.

The contributions of this paper are summarized as follows:

- We propose XDoc, a unified pre-trained model that tackles texts in various formats in pursuit of parameter efficiency.
- Pre-trained with only the masked language modeling task, XDoc achieves comparable or even better accuracy on various downstream tasks.
- The code and pre-trained models are publicly available at https://aka.ms/xdoc.

# 2 XDoc

In this section, we first introduce the architecture of XDoc and the details of the embedding used for each document format, then introduce the objectives for pre-training the XDoc model.

# 2.1 Model Architecture

As demonstrated in Figure 3, XDoc is capable of tackling texts in various formats (plain, document, and web texts) in one model. For any input sequence, XDoc learns to embed it using a shared backbone, plus additional embedding layers when other prior knowledge is available. In detail, for any input text $T$, XDoc first tokenizes it into subwords $\mathbf{s} = s_{1:L}$ using WordPiece, where
We demonstrate the dataflow for document texts and use dashed lines for the other formats.

$L$ denotes the maximum length. Subsequently, each subword $s_i$ with index $i$ is first fed to a word embedding layer, whose output we denote as $\mathrm{WordEmb}(s_i)$. It is then added to a learnable 1D position embedding $\mathrm{1DEmb}(i)$. Since the word embedding and 1D position embedding layers are indispensable for Transformer-based models, we share their parameters across different formats. Based on this, we detail the overall embedding for each document format in the following.

Overall embedding for plain texts As there is no additional prior knowledge for plain texts, we simply add up the word embedding and 1D position embedding to construct the input for the Transformer layers, following (Devlin et al., 2018; Liu et al., 2019). For each word $s_i^P$, where $i$ is the index and "P" denotes "Plain", the overall embedding $\mathrm{Emb}(s_i^P)$ is calculated as follows:

$$
\mathrm{Emb}(s_i^P) = \mathrm{WordEmb}(s_i^P) + \mathrm{1DEmb}(i) \tag{1}
$$

Overall embedding for document texts Different from plain texts, visually rich document texts are usually organized in 2-D layouts, where the coordinates of each text box play a crucial role in understanding. Hence, the 2D position must be taken into account during pre-training. Concretely, for a given subword $s_i^D$ ("D" is the abbreviation of "Document"), we denote the 2D position as $\text{box}_i^D = (l_i, r_i, t_i, b_i, w_i, h_i)$, where $l, r, t, b, w, h$ denote the left, right, top, and bottom coordinates, and the width and height of the text box, respectively. For example, as illustrated in Figure 2(b), $l, r, t, b, w, h$ of the text "PERSONAL" are set to 240, 275, 80, 100, 35, and 20, respectively.
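As a minimal numpy sketch of Eq. (1), the shared word and 1D position lookups can be written as follows (the vocabulary size and embedding width are illustrative assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, MAX_LEN, DIM = 1000, 512, 64   # illustrative sizes, not the paper's

word_emb = rng.normal(size=(VOCAB, DIM))   # shared WordEmb lookup table
pos_emb = rng.normal(size=(MAX_LEN, DIM))  # shared 1DEmb lookup table

def embed_plain(token_ids):
    """Eq. (1): Emb(s_i^P) = WordEmb(s_i^P) + 1DEmb(i)."""
    ids = np.asarray(token_ids)
    return word_emb[ids] + pos_emb[np.arange(len(ids))]

print(embed_plain([5, 17, 42]).shape)  # (3, 64)
```

The document- and web-text embeddings follow the same pattern, adding one extra term produced by a format-specific adaptive layer.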
Considering that most parameters are shared across different formats, we introduce an adaptive layer to strengthen the format-specific prior information. The adaptive layer is simply implemented as a lightweight Linear-ReLU-Linear sequence; we discuss its effectiveness in Section 3.4. Following (Xu et al., 2020, 2021a), we add up all the embeddings to construct the overall embedding $\mathrm{Emb}(s_i^D)$ as follows:

$$
\mathrm{Emb}(s_i^D) = \mathrm{WordEmb}(s_i^D) + \mathrm{1DEmb}(i) + \mathrm{DocAdaptive}\left[\mathrm{2DEmb}(box_i^D)\right] \tag{2}
$$

$$
\begin{aligned}
\mathrm{2DEmb}(box_i^D) ={} & \mathrm{LeftEmb}(l_i) + \mathrm{RightEmb}(r_i) \\
& + \mathrm{TopEmb}(t_i) + \mathrm{BottomEmb}(b_i) \\
& + \mathrm{WidthEmb}(w_i) + \mathrm{HeightEmb}(h_i)
\end{aligned} \tag{3}
$$

where "LeftEmb" denotes the embedding layer for the left coordinates (the other embedding layers follow the same naming convention). Please note that the adaptive layer is not shared across formats, and "DocAdaptive" is specifically used for document texts.

Overall embedding for web texts Since the 2-D layout of a website is not fixed and highly depends on the resolution of the rendering device, we only employ XPaths as the prior knowledge, following (Li et al., 2022a). Concretely, for each subword $s_i^W$ ("W" is the abbreviation of "Web"), its XPath $xpath_{i}^{W}$ can be represented by a tag sequence and a subscript sequence. Taking the text "Acura" in Figure 2(c) as an instance, its original XPath expression is /html/body/div/a/div/div/span[2].
Following MarkupLM (Li et al., 2022a), we construct the tag sequence as [html, body, div, a, div, div, span], representing the tag order from the root to the current node. In addition, the subscript sequence is set to $[0, 0, 0, 0, 0, 0, 2]$, where each subscript denotes the index of a node when multiple nodes have the same tag name under a parent node (more explanations are given in Appendix A). We add the tag embedding and subscript embedding to get the XPath embedding $\mathrm{XPathEmb}(xpath_{i}^{W})$. The overall embedding is calculated as:

$$
\mathrm{Emb}(s_i^W) = \mathrm{WordEmb}(s_i^W) + \mathrm{1DEmb}(i) + \mathrm{WebAdaptive}\left[\mathrm{XPathEmb}(xpath_i^W)\right] \tag{4}
$$

Similarly, we leverage an adaptive layer "WebAdaptive" for better pre-training. The overall embedding is then fed to the shared Transformer layers to obtain the contextual representations.

# 2.2 Pre-training Objectives

We employ masked language modeling (MLM) as the pre-training task, following (Devlin et al., 2018; Liu et al., 2019; Xu et al., 2020). More specifically, we randomly mask $15\%$ of the input tokens, of which $80\%$ are converted to a special [MASK] token, $10\%$ are randomly replaced with other tokens, and $10\%$ remain unchanged. Through pre-training, the model learns to maximize the probability of the masked tokens given the contextual representations.

# 3 Experiments

In this section, we first introduce the model configuration and detail the hyperparameters of XDoc, then introduce its pre-training strategies. Next, we demonstrate the experimental results on a wide range of downstream tasks. Finally, we verify the effectiveness of several design choices in XDoc and provide a discussion.
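The decomposition of an XPath expression into a tag sequence and a subscript sequence (Section 2.1) can be sketched as follows, defaulting missing subscripts to 0 as described above:

```python
import re

def split_xpath(xpath):
    """Split an XPath such as /html/body/div/a/div/div/span[2] into a tag
    sequence and a subscript sequence (subscript 0 when no index is given)."""
    tags, subscripts = [], []
    for step in xpath.strip("/").split("/"):
        match = re.fullmatch(r"(\w+)(?:\[(\d+)\])?", step)
        tags.append(match.group(1))
        subscripts.append(int(match.group(2)) if match.group(2) else 0)
    return tags, subscripts

tags, subs = split_xpath("/html/body/div/a/div/div/span[2]")
print(tags)  # ['html', 'body', 'div', 'a', 'div', 'div', 'span']
print(subs)  # [0, 0, 0, 0, 0, 0, 2]
```

The two sequences are then looked up in the tag and subscript embedding tables and summed to form $\mathrm{XPathEmb}$.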
# 3.1 Model Configurations

The proposed XDoc is initialized with RoBERTa$_{\mathrm{BASE}}$, containing 12 Transformer layers, 768 hidden units, and 12 attention heads. The maximum length of each input sequence is set to 512, with a [CLS] token and a [SEP] token appended at the beginning and the end, respectively. Input sequences longer than 512 are truncated, while sequences shorter than 512 are padded with [PAD] tokens.

# 3.2 Pre-training XDoc

A large-scale corpus plays an essential role in learning better representations during pre-training (Liu et al., 2019). Specifically, we utilize three categories of datasets for pre-training, detailed as follows.

Pre-training data for plain texts. We follow (Liu et al., 2019) in leveraging five English-language corpora for pre-training: BOOKCORPUS (Zhu et al., 2015), English WIKIPEDIA1, CC-NEWS (Nagel, 2016), OPENWEBTEXT (Aaron Gokaslan, 2019), and STORIES (Trinh and Le, 2018), totaling 213,713 files.

Pre-training data for document texts. We leverage the large-scale scanned document image dataset IIT-CDIP Test Collection 1.0 (Lewis et al., 2006) following (Xu et al., 2020, 2021a; Huang et al., 2022). This dataset contains 42 million document pages, each of which is processed by the OCR tool Tesseract² to yield the text contents and their locations. For a fair comparison with previous works, we only use 11 million of them for pre-training. Please note that we follow LayoutLMv3 (Huang et al., 2022) in utilizing segment-level layout positions, where words in a segment share the same 2D position.

Pre-training data for web texts. Following MarkupLM (Li et al., 2022a), we take advantage of the large-scale Common Crawl3 dataset, which contains petabytes of web pages in raw formats. Specifically, both the text contents and HTML tags are available for each web page.
According to (Li et al., 2022a), the authors first filtered Common Crawl with fastText (Bojanowski et al., 2017) to remove non-English pages and then kept only common tags to save disk storage, resulting in 24 million web pages for pre-training.

We do not use any data augmentation or ensemble strategies for pre-training. We use the AdamW optimizer (Loshchilov and Hutter, 2019) with a learning rate of 5e-5 and an epsilon of 1e-8, and linearly warm up the learning rate over the first $5\%$ of steps.
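The linear warmup described above can be sketched as follows (holding the rate constant after warmup is an assumption, since the decay schedule is not stated):

```python
def warmup_lr(step, total_steps, base_lr=5e-5, warmup_frac=0.05):
    """Linearly ramp the learning rate over the first `warmup_frac` of
    training, then hold it at `base_lr` (the decay schedule is assumed flat)."""
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return base_lr * ((step + 1) / warmup_steps)
    return base_lr

# In a 100K-step run, the rate ramps up over the first 5,000 steps.
print(warmup_lr(4_999, 100_000))   # 5e-05 (end of warmup)
print(warmup_lr(50_000, 100_000))  # 5e-05 (held constant afterwards)
```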
| # | Model | Steps | P | D | W | MNLI-m | QNLI | SST-2 | MRPC | SQuAD 1.1 / 2.0 | FUNSD | DocVQA | WebSRC |
|---|----------|------|---|---|---|--------|------|-------|------|-----------------|-------|--------|--------|
| 1 | RoBERTa  | -    | - | - | - | 87.6 | 92.8 | 94.8 | 90.2 | 92.2* / 83.4* | -    | -    | -    |
| 2 | LayoutLM | -    | - | - | - | -    | -    | -    | -    | -             | 79.3 | 69.2 | -    |
| 3 | MarkupLM | -    | - | - | - | -    | -    | -    | -    | -             | -    | -    | 74.5 |
| 4 | XDoc     | 100K | ✓ |   |   | 87.0 | 93.0 | 95.2 | 90.1 | 91.9 / 83.4   | 70.1 | 64.5 | 58.5 |
| 5 | XDoc     | 100K |   | ✓ |   | 86.7 | 91.3 | 94.5 | 89.9 | 91.4 / 82.9   | 87.3 | 69.4 | 58.6 |
| 6 | XDoc     | 100K |   |   | ✓ | 86.5 | 92.0 | 94.6 | 90.1 | 91.4 / 83.1   | 71.6 | 63.6 | 64.8 |
| 7 | XDoc     | 100K | ✓ | ✓ |   | 87.2 | 92.7 | 94.9 | 90.2 | 91.9 / 83.5   | 85.7 | 69.1 | 57.5 |
| 8 | XDoc     | 100K |   | ✓ | ✓ | 86.4 | 91.6 | 95.3 | 91.0 | 91.7 / 83.5   | 85.7 | 69.5 | 65.0 |
| 9 | XDoc     | 100K | ✓ |   | ✓ | 86.8 | 92.3 | 95.1 | 90.6 | 91.6 / 83.0   | 70.0 | 64.7 | 64.8 |
| 10 | XDoc    | 100K | ✓ | ✓ | ✓ | 86.2 | 92.8 | 95.2 | 91.3 | 91.7 / 83.0   | 86.4 | 68.3 | 67.0 |
| 11 | XDoc    | 500K | ✓ | ✓ | ✓ | 86.6 | 92.2 | 95.2 | 89.9 | 91.7 / 83.1   | 89.1 | 72.6 | 73.3 |
| 12 | XDoc    | 1M   | ✓ | ✓ | ✓ | 86.8 | 92.3 | 95.3 | 91.1 | 92.0 / 83.5   | 89.4 | 72.7 | 74.8 |
+ +Table 1: Results on downstream tasks for various document formats. P, D, and W denote whether XDoc is pre-trained with plain, document, and web texts, respectively. Compared with methods designed for a specific format (#1~#3), XDoc achieves comparable or even better performance. Accuracy is used for MNLI-m, QNLI, and SST-2 for evaluation. F1 score is used for MRPC, SQuAD, FUNSD, and WebSRC. ANLS is used for DocVQA. Digits marked with * denote that we re-implement the results since the original paper did not report them. + +Experiments are conducted with 32 NVIDIA Tesla V100 GPUs with 32GB memory. For those experiments pre-trained for 100K steps, we set the batch size to 128, while using all plain text datasets, the subset of document text (1 million), and web text (1 million) datasets for pre-training. Besides, we set the batch size to 512 and leverage all datasets for experiments pre-trained for 500K and 1M steps. FP16 is used during pre-training for accelerating and saving GPU memory. Within each batch, we equally sample documents in different formats for pre-training (see more discussions in Appendix B). + +# 3.3 Fine-tuning on Downstream Tasks + +In this subsection, we utilize a wide range of downstream datasets to validate the ability of pre-trained XDoc in different formats. Specifically, for the plain texts, we leverage the widely-used GLUE benchmark (Wang et al., 2019) and SQuAD (Rajpurkar et al., 2016, 2018). For document texts, we use the form understanding dataset FUNSD (Jaume et al., 2019) and question-answering dataset DocVQA (Mathew et al., 2021). For web texts, we utilize the question-answering dataset WebSRC (Chen et al., 2021). In the following, we will first introduce the downstream datasets in each format, then demonstrate the experimental results in detail. 
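The equal per-format sampling within each batch (described in the pre-training setup above) can be sketched as a simple round-robin over one stream per format; the exact interleaving policy is an assumption:

```python
from collections import Counter
from itertools import cycle, islice

def equal_format_batch(streams, batch_size):
    """Round-robin over one example stream per format so that each format
    contributes an equal share of every batch."""
    iterators = [iter(s) for s in streams]
    return [next(it) for it in islice(cycle(iterators), batch_size)]

plain = (("plain", i) for i in range(100))
doc = (("doc", i) for i in range(100))
web = (("web", i) for i in range(100))

batch = equal_format_batch([plain, doc, web], 12)
print(Counter(fmt for fmt, _ in batch))  # Counter({'plain': 4, 'doc': 4, 'web': 4})
```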
# 3.3.1 Fine-tuning on Tasks for Plain texts

Fine-tuning on GLUE benchmark We evaluate XDoc on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019), which contains 9 datasets in total for evaluating natural language understanding systems. Four of them, MNLI-m, QNLI, SST-2, and MRPC, are used for evaluation. We fine-tune XDoc for 10 epochs with a learning rate of 2e-5 and a batch size of 16. Linear warmup is used for the first 100 steps. We use accuracy as the evaluation metric for MNLI-m, QNLI, and SST-2, and the F1 score for MRPC.

The experimental results are shown in Table 1, with RoBERTa$_{\mathrm{BASE}}$ (Liu et al., 2019) as the baseline (#1). According to #4, after pre-training with plain texts, the performance of XDoc is almost consistent with the baseline. This is intuitive, since XDoc is initialized with RoBERTa$_{\mathrm{BASE}}$ and the continued training does not affect the performance. Interestingly, if XDoc is pre-trained without plain texts (refer to #5, #6, and #8), the performance is still on par with the baseline, indicating that the knowledge of plain texts is not easily forgotten when XDoc is pre-trained on other formats.

Fine-tuning on SQuAD V1.1 and V2.0 We further employ the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016, 2018) for evaluation. SQuAD has two versions: V1.1 and V2.0. For V1.1, given a question, the answer can always be retrieved from the paragraph. By contrast, V2.0 also contains questions that cannot be answered, which makes it more challenging than V1.1. Specifically, XDoc is fine-tuned for 2 epochs for V1.1 and 4 epochs for V2.0. We set the batch size to 16 and the learning rate to 3e-5, and use the F1 score as the evaluation metric.

We also use RoBERTa$_{\mathrm{BASE}}$ (Liu et al., 2019) as the baseline (#1).
As demonstrated in Table 1, the performance does not fluctuate much under the various pre-training settings (#4~#12). Similar to the results on the GLUE benchmark, XDoc achieves comparable performance when pre-trained on all formats (refer to #10~#12).

# 3.3.2 Fine-tuning on Task for Document texts

Fine-tuning on FUNSD We utilize the form understanding dataset FUNSD (Jaume et al., 2019) to verify the ability of XDoc. Derived from the RVL-CDIP dataset (Harley et al., 2015), FUNSD contains 199 noisy scanned documents (149 samples for training and 50 for test) with 9,709 semantic entities and 31,485 words. Specifically, we focus on the entity labeling task, i.e. labeling "question", "answer", "header", or "other" in the given form. Concretely, we fine-tune XDoc for 1000 steps with a batch size of 64 and a learning rate of 5e-5, using linear warmup for the first 100 steps. The coordinates are normalized by the size of the images following (Xu et al., 2020). F1 score is adopted as the evaluation metric.

For a fair comparison, we choose LayoutLM$_{\mathrm{BASE}}$ (#2) (Xu et al., 2020) as the baseline, which exploits layout and text knowledge for tackling visually rich document understanding. We observe that XDoc outperforms the baseline by a large margin when document texts are used during pre-training. According to #10, the performance is boosted by $7.1\%$ when all formats are used for pre-training, and it improves further when XDoc is trained for more steps (a further increase of $3.0\%$ according to #12). In contrast, the performance deteriorates heavily if document texts are absent during pre-training (a decrease of $9.3\%$ according to #9).
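The coordinate normalization mentioned above can be sketched as follows; the 0-1000 integer grid is an assumption borrowed from the LayoutLM convention (Xu et al., 2020), which this paper follows:

```python
def normalize_box(box, width, height, scale=1000):
    """Normalize (left, top, right, bottom) pixel coordinates by the page
    size so that boxes from pages of any resolution share one value range."""
    l, t, r, b = box
    return (
        int(scale * l / width),
        int(scale * t / height),
        int(scale * r / width),
        int(scale * b / height),
    )

# A box on a 500x1000 page mapped onto the shared 0-1000 grid.
print(normalize_box((100, 200, 150, 220), 500, 1000))  # (200, 200, 300, 220)
```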
Fine-tuning on DocVQA To further validate the ability of XDoc on document texts, we utilize the document question-answering dataset DocVQA (Mathew et al., 2021), which contains 10,194/1,286/1,287 images with 39,463/5,349/5,188 questions for the training/validation/test sets, respectively. We follow LayoutLMv2 (Xu et al., 2021a) in employing the Microsoft Read API to produce OCR results and find the given answers heuristically. We evaluate XDoc on the evaluation set, and the final scores are obtained by submitting the results to the official website4. We fine-tune XDoc for 10 epochs with a batch size of 16 and a learning rate of 2e-5. The linear warmup strategy is used for the first $10\%$ of steps. Following (Xu et al., 2020), we normalize the coordinates by the size of the images. We use Average Normalized Levenshtein Similarity (ANLS) as the evaluation metric.

As LayoutLM$_{\mathrm{BASE}}$ (Xu et al., 2020) did not report results on DocVQA, we borrow the ANLS score from LayoutLMv2 (Xu et al., 2021a). Similar to the experimental results on FUNSD, we observe that the performance of XDoc highly depends on the presence of document texts during pre-training. For example, if XDoc is pre-trained without document texts, the performance drops by $4.7\%$, $5.6\%$, and $4.5\%$ according to #4, #6, and #9, respectively. When pre-training for 100K steps with all formats, XDoc obtains comparable performance (refer to #10). Furthermore, XDoc outperforms the baseline when trained for more steps (refer to #11 and #12).

# 3.3.3 Fine-tuning on Task for Web Texts

Fine-tuning on WebSRC We employ the Web-based Structural Reading Comprehension dataset (WebSRC) (Chen et al., 2021) to verify the ability of XDoc on web texts. It contains 440K question-answer pairs collected from 6.5K web pages; the HTML source code, screenshots, and metadata are available for each page. The training/validation/test parts consist of 307,315/52,826/40,357 question-answer pairs.
The answer is either a text span in the given web page or yes/no. We fine-tune XDoc for 5 epochs with a batch size of 16, a learning rate of 5e-5, and a linear warmup ratio of 0.1. F1 score is used as the metric.

We use MarkupLM$_{\mathrm{BASE}}$ (Li et al., 2022a) as the baseline (#3). When XDoc is pre-trained for only 100K steps, its performance is subpar compared with the baseline. This is intuitive, since MarkupLM is pre-trained with three pretext tasks: masked language modeling, node relation prediction, and title-page matching. Interestingly, we observe that when training for more steps (#12), the performance of XDoc surpasses
| Init | MNLI-m | FUNSD | WebSRC | Avg |
|---------|------|------|------|------|
| Scratch | 75.4 | 78.8 | 29.2 | 61.1 |
| RoBERTa | 86.2 | 86.4 | 57.5 | 76.7 |
+ +Table 2: Results on the initialization of XDoc. + +
| Layers | MNLI-m | FUNSD | WebSRC | Avg |
|------------------|------|------|------|------|
| 0                | 86.4 | 85.0 | 54.7 | 75.4 |
| 1                | 86.2 | 86.4 | 57.5 | 76.7 |
| 2                | 86.7 | 84.8 | 55.0 | 75.5 |
| 3                | 86.4 | 86.1 | 55.7 | 76.1 |
| $1^{\dagger}$    | 86.4 | 84.8 | 57.3 | 76.2 |
Table 3: Results on the symmetry and number of adaptive layers. $\dagger$ means that the document and web branches share the same adaptive layers.

the baseline. Similarly, it is observed that the performance drops heavily if web texts are absent during pre-training (refer to #4, #5, and #7).

# 3.4 Discussions

In this subsection, we conduct experiments to validate the effectiveness of the components and training strategies in XDoc. Unless specified otherwise, all experiments are pre-trained with 3M examples (1M for each format) for 100K steps. Moreover, we discuss the parameter and time efficiency.

The initialization of XDoc We try randomly initializing the parameters of XDoc from a normal distribution; the results are shown in Table 2. We observe that XDoc trained from scratch performs worse on downstream tasks: the performance drops by $10.8\%$ for MNLI-m, $7.6\%$ for FUNSD, and $28.3\%$ for WebSRC. Therefore, we choose to initialize XDoc with RoBERTa$_{\mathrm{BASE}}$ for better pre-training.

The symmetry and number of adaptive layers We utilize adaptive layers, implemented as a sequence of Linear and ReLU layers, to enhance the representations of format-specific parts such as the 2D position and XPath embeddings. Here we explore the symmetry and the number of adaptive layers. In detail, "symmetry" means the document and web branches share the same adaptive layers. Additionally, we denote the number of layers by the number of ReLU layers (e.g. Layers=2 means Linear-ReLU-Linear-ReLU-Linear and Layers=0 means no adaptive layer is used). As demonstrated in Table 3, the average performance is best when only one adaptive layer is used. Moreover, if we apply different adaptive layers to the document and web branches, the average performance is boosted by $0.5\%$ compared with the shared counterpart $(76.2\%)$.

Parameter efficiency We demonstrate some analysis of the parameters in Table 4.
We observe that the word embedding and Transformer layers contain most of the parameters (124M), occupying $96.9\%$, $94.7\%$, and $89.2\%$ of all parameters for RoBERTa$_{\mathrm{BASE}}$, LayoutLM$_{\mathrm{BASE}}$, and MarkupLM$_{\mathrm{BASE}}$, respectively. By sharing the word embedding, 1D position embedding, and Transformer layers across multiple text formats, the proposed XDoc is efficient in terms of parameter usage. In detail, the three single models together contain 398M parameters, while XDoc contains only 146M parameters (146M/398M ≈ 36.7%) yet can be used for downstream tasks in multiple formats. Besides, the newly introduced adaptive layers contain only 4M parameters, which is almost negligible for the whole model (2.7%).

Time efficiency Apart from the newly introduced adaptive layer, the architecture of XDoc is similar to that of models targeting one specific document format. Since the adaptive layer is lightweight, it adds little time overhead. For example, when conducting inference on the DocVQA dataset, a batch costs $45\mathrm{ms}$, of which the adaptive layer consumes a negligible $0.8\mathrm{ms}$ $(1.8\%)$. Hence, XDoc is efficient in terms of time cost.

# 4 Related Work

In this section, we review pre-trained methods for document understanding, covering plain, document, and web texts, respectively.

Pre-trained methods for plain texts The understanding of plain texts through pre-training has been extensively studied during the last decade (Devlin et al., 2018; Yang et al., 2019; Bao et al., 2020; Liu et al., 2019; Lewis et al., 2020; Lan et al., 2019; Jiang et al., 2020; He et al., 2021; Dong et al., 2019; Lample and Conneau, 2019; Lin et al., 2021). For example, GPT (Radford et al., 2019; Brown et al., 2020) utilizes the Transformer (Vaswani et al., 2017) to conduct single-direction masked-word prediction in an unsupervised manner.
Besides, BERT (Devlin et al., 2018) utilizes two self-supervised tasks, masked language modeling and next sentence prediction, to obtain robust representations of
| Methods | Word (39M) | 1D Position (4M) | Transformer (85M) | 2D Position (3M) | XPath (11M) | Adaptive (4M) | Total |
|----------|---|---|---|---|---|---|------|
| RoBERTa  | ✓ | ✓ | ✓ | - | - | - | 128M |
| LayoutLM | ✓ | ✓ | ✓ | ✓ | - | - | 131M |
| MarkupLM | ✓ | ✓ | ✓ | - | ✓ | - | 139M |
| XDoc     | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 146M |
Table 4: Analysis of the parameter efficiency. XDoc shares most parameters across different formats, including word embedding, 1D position embedding, and Transformer layers. We omit some layers that contain negligible parameters, such as segment embedding layers and LayerNorm layers. All the comparison models are in base size.

words based on Transformer. SpanBERT (Joshi et al., 2020) and ERNIE (Zhang et al., 2019) mask consecutive text spans so as to construct a more challenging pre-training task. In (Dong et al., 2019), the authors used different kinds of attention masks to enable uni-directional and bi-directional attention. XLNet (Yang et al., 2019) introduces a generalized autoregressive pre-training framework that utilizes a permutation language modeling objective. ELECTRA (Clark et al., 2020) first samples candidates for the masked words and then uses a discriminator to predict whether a given token has been replaced.

Pre-trained methods for document texts Benefiting from the public large-scale document dataset (Lewis et al., 2006), pre-training has become the de facto standard for analyzing document texts (Zhang et al., 2020; Wang et al., 2021; Li et al., 2021b; Xu et al., 2021b; Li et al., 2022b; Appalaraju et al., 2021; Garncarek et al., 2021; Gu et al., 2022b,a; Wu et al., 2021; Wang et al., 2022a). LayoutLM (Xu et al., 2020) makes the first attempt to incorporate layout knowledge during pre-training to obtain robust contextual features for document texts. Based on LayoutLM, LayoutXLM (Xu et al., 2021b) utilizes multilingual document text datasets for pre-training. StructuralLM (Li et al., 2021a) jointly utilizes cell and layout information from scanned documents to make the representations more robust. LayoutLMv2 (Xu et al., 2021a) introduces a multi-modal architecture by adding image tokens in the Transformer. BROS (Hong et al., 2022) utilizes token-masking and area-masking strategies for tackling information extraction tasks.
XYLayoutLM (Gu et al., 2022b) proposes an Augmented XY-Cut algorithm to exploit proper reading orders during pre-training. Recently, LayoutLMv3 (Huang et al., 2022) pre-trains the text branch and image branch simultaneously using masked language modeling and masked image modeling tasks, which makes it a robust model for tackling both text-centric and image-centric tasks.

Pre-trained methods for web texts Compared with plain and document text analysis, the understanding of web texts is less studied and more challenging, since the layout of each website is not fixed (i.e. it depends on the resolution of the rendering device). MarkupLM (Li et al., 2022a) makes the first attempt to incorporate web-based knowledge during pre-training, utilizing three pretext tasks: masked language modeling, node relation prediction, and title-page matching. Further, based on MarkupLM, DOM-LM (Deng et al., 2022) introduces a new pre-training task that predicts masked HTML nodes. WebFormer (Wang et al., 2022b) simultaneously feeds text features and image features to a multi-modal Transformer while constructing rich attention patterns between these tokens.

Generally, although the mentioned methods show impressive performance in one specific format, they cannot be transferred to tackle other formats. To mitigate this problem, the proposed XDoc is a scalable and flexible framework that accommodates a wide range of formats, bringing considerable convenience in practice.

# 5 Conclusion and Future Work

In this paper, we propose XDoc, a unified framework that can tackle multiple document formats (e.g. plain, document, and web texts) in one model. For parameter efficiency, XDoc shares most parameters, including the word embedding, 1D position embedding, and Transformer layers, across different document formats. The experimental results show that with only $36.7\%$ of the parameters, XDoc can achieve comparable or even better performance on downstream tasks spanning various document formats.
For future work, we will consider exploiting the image features during pre-training to tackle image-centric tasks and designing more unified pre-training tasks for various document formats.

# Limitations

As XDoc only leverages the text and layout information for pre-training, it is not suitable for tackling some image-centric tasks such as page object detection. For example, we could append some image tokens in the Transformer (for plain text, we can simply use [PAD] tokens since there are no image features) and conduct cross-attention with the text tokens. Besides, XDoc uses masked language modeling as the only pre-training task in this version. For future work, we will consider designing more unified pre-training tasks for various document formats.

# References

Vanya Cohen Aaron Gokaslan. 2019. Openwebtext corpus. http://web.archive.org/save/http://Skylion007.github.io/OpenWebTextCorpus.
Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R Manmatha. 2021. Docformer: End-to-end transformer for document understanding. In ICCV.
Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Songhao Piao, Ming Zhou, et al. 2020. Unilmv2: Pseudo-masked language models for unified language model pre-training. In ICML.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In NeurIPS.
Lu Chen, Xingyu Chen, Zihan Zhao, Danyang Zhang, Jiabao Ji, Ao Luo, Yuxuan Xiong, and Kai Yu. 2021. Websrc: A dataset for web-based structural reading comprehension. In EMNLP.
Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020.
Electra: Pre-training text encoders as discriminators rather than generators. In ICLR. +Lei Cui, Yiheng Xu, Tengchao Lv, and Furu Wei. 2021. Document ai: Benchmarks, models and applications. In CCL. + +Xiang Deng, Prashant Shiralkar, Colin Lockard, Binxuan Huang, and Huan Sun. 2022. Dom-lm: Learning generalizable representations for html documents. arXiv preprint arXiv:2201.10608. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*. +Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In NeurIPS. +Łukasz Garncarek, Rafał Powalski, Tomasz Stanisławek, Bartosz Topolski, Piotr Halama, Michal Turski, and Filip Graliński. 2021. Lambert: Layout-aware language modeling for information extraction. In ICDAR. +Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Nikolaos Barmpalios, Rajiv Jain, Ani Nenkova, and Tong Sun. 2022a. Unified pretraining framework for document understanding. In NeurIPS. +Zhangxuan Gu, Changhua Meng, Ke Wang, Jun Lan, Weiqiang Wang, Ming Gu, and Liqing Zhang. 2022b. Xlayoutlm: Towards layout-aware multimodal networks for visually-rich document understanding. In CVPR. +Adam W Harley, Alex Ufkes, and Konstantinos G Derpanis. 2015. Evaluation of deep convolutional nets for document image classification and retrieval. In ICDAR. +Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In ICLR. +Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, and Sungrae Park. 2022. Bros: A pre-trained language model focusing on text and layout for better key information extraction from documents. In AAAI. +Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. 2022. 
Layoutlmv3: Pre-training for document ai with unified text and image masking. In ACM MM. +Guillaume Jaume, Hazim Kemal Ekenel, and Jean-Philippe Thiran. 2019. Funsd: A dataset for form understanding in noisy scanned documents. In IC-DAR Workshop. +Zi-Hang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, and Shuicheng Yan. 2020. Convbert: Improving bert with span-based dynamic convolution. In NeurIPS. +Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: + +Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics. +Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291. +Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. +David Lewis, Gady Agam, Shlomo Argamon, Ophir Frieder, David Grossman, and Jefferson Heard. 2006. Building a test collection for complex document information processing. In SIGIR. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL. +Chenliang Li, Bin Bi, Ming Yan, Wei Wang, Songfang Huang, Fei Huang, and Luo Si. 2021a. Structurallm: Structural pre-training for form understanding. In ACL. +Junlong Li, Yiheng Xu, Lei Cui, and Furu Wei. 2022a. Markuplm: Pre-training of text and markup language for visually-rich document understanding. In ACM MM. +Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, and Furu Wei. 2022b. Dit: Self-supervised pre-training for document image transformer. In IC-DAR. +Minghao Li, Tengchao Lv, Jingye Chen, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, and Furu Wei. 
2021b. Trocr: Transformer-based optical character recognition with pre-trained models. arXiv preprint arXiv:2109.10282.
Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, et al. 2021. Few-shot learning with multilingual language models. arXiv preprint arXiv:2112.10668.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In ICLR.
Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. 2021. Docvqa: A dataset for vqa on document images. In WACV.
Sebastian Nagel. 2016. Cc-news. http://web.archive.org/save/http://commoncrawl.org/2016/10/news-dataset-available.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for squad. In ACL.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP.
Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding. In ICLR.
Jiapeng Wang, Lianwen Jin, and Kai Ding. 2022a. Lilt: A simple yet effective language-independent layout transformer for structured document understanding. In ACL.
Qifan Wang, Yi Fang, Anirudh Ravula, Fuli Feng, Xiaojun Quan, and Dongfang Liu. 2022b. Webformer: The web-page transformer for structure information extraction. In WWW.

Zilong Wang, Yiheng Xu, Lei Cui, Jingbo Shang, and Furu Wei. 2021. Layoutreader: Pre-training of text and layout for reading order detection. In EMNLP.

Te-Lin Wu, Cheng Li, Mingyang Zhang, Tao Chen, Spurthi Amba Hombaiah, and Michael Bendersky. 2021. Lampret: Layout-aware multimodal pretraining for document understanding. arXiv preprint arXiv:2104.08405.

Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, et al. 2021a. Layoutlmv2: Multi-modal pre-training for visually-rich document understanding. In ACL.

Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. Layoutlm: Pre-training of text and layout for document image understanding. In KDD.

Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, and Furu Wei. 2021b. Layoutxlm: Multimodal pre-training for multilingual visually-rich document understanding. arXiv preprint arXiv:2104.08836.

Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, and Furu Wei. 2022. Xfund: A benchmark dataset for multilingual visually rich form understanding. In ACL.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS.

Peng Zhang, Yunlu Xu, Zhanzhan Cheng, Shiliang Pu, Jing Lu, Liang Qiao, Yi Niu, and Fei Wu. 2020. Trie: end-to-end text reading and information extraction for document understanding. In ACM MM.

Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. Ernie: Enhanced language representation with informative entities. In ACL.

Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015.
Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV.

# A Details of XPath embedding

As illustrated in Figure 4, each web page can be represented as a DOM (Document Object Model) tree based on its HTML source code. XPath is a query language for selecting nodes in the DOM tree. For example, the XPath of the text "Tom" can be represented as "/html/body/div/span[2]", where the tag names give the path traversed from the root node and the subscripts give the index of a node when more than one node shares the same tag name under a parent node. For tags without subscripts, we simply set the subscript to 0. Following MarkupLM, we filter out unimportant tags and reserve only common tags (e.g., `<div>`, `<span>`, and `<li>`).

To construct the XPath embedding for a given subword $s_i^W$, we first denote its XPath as $xpath_i^W = [(tag_1, sub_1), (tag_2, sub_2), \dots, (tag_D, sub_D)]$, where $D$ denotes the maximum depth of the sequence, and $tag_j$ and $sub_j$ denote the tag name and subscript at the $j$-th depth, respectively. For example, we represent the XPath of the text "Tom" as $[(html,0),(body,0),(div,0),(span,2)]$. Subsequently, for each pair $(tag_j, sub_j)$ at depth $j$, we calculate its embedding $ts_j$ by adding up the tag embedding and the subscript embedding:

$$
ts_j = \mathrm{TagEmb}_j(tag_j) + \mathrm{SubEmb}_j(sub_j) \tag{5}
$$

Note that the embedding layers for tags and subscripts are not shared across depths. Finally, we concatenate the embeddings of all pairs to construct the XPath embedding:

$$
\mathrm{XPathEmb}(xpath_i^W) = [ts_1; ts_2; \dots; ts_D] \tag{6}
$$

![](images/d086d71ab04869705670b9f96b872b882035fdf2926f256463661afe389d3254.jpg)
(a) HTML source code; (b) DOM tree and XPath
Figure 4: Illustration of how XPath is constructed from the corresponding HTML source code. Some examples of XPath are indicated with red arrows.

# B Balance of Pre-training Data

We experiment with different sampling ratios for the different formats during pre-training; the results are shown in Table 5. For example, "3:1:1" denotes that a batch contains approximately $60\%$ plain texts, $20\%$ document texts, and $20\%$ web texts. We notice that the average performance is best $(76.7\%)$ with the balanced sampling strategy. Interestingly, we observe that the sampling ratio for a specific format does not positively correlate with performance on that format. For instance, when "P:D:W" is set to 1:1:3, the performance on WebSRC is the worst $(55.4\%)$ among all experiments.
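The ratio-based batch composition described above can be sketched as weighted sampling. The label names, function name, and sampling mechanics below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch of ratio-based batch composition (e.g., "P:D:W" = 3:1:1);
# the format labels and sampling mechanics are assumptions for illustration.
import random

def sample_batch_formats(ratio, batch_size, seed=0):
    """Draw a format label ('plain', 'doc', 'web') for each example in a batch."""
    formats = ["plain", "doc", "web"]
    rng = random.Random(seed)
    return rng.choices(formats, weights=list(ratio), k=batch_size)

batch = sample_batch_formats(ratio=(3, 1, 1), batch_size=1000)
# With a 3:1:1 ratio, roughly 60% of the sampled examples are plain text.
print(round(batch.count("plain") / len(batch), 2))
```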
| P:D:W | MNLI-m | FUNSD | WebSRC | Avg |
| --- | --- | --- | --- | --- |
| 1:1:1 | 86.2 | 86.4 | 57.5 | 76.7 |
| 3:1:1 | 86.7 | 83.8 | 56.7 | 75.7 |
| 1:3:1 | 86.7 | 84.8 | 56.6 | 76.0 |
| 1:1:3 | 87.1 | 83.7 | 55.4 | 75.4 |
    + +Table 5: Results on the balance of pre-training datasets. P:D:W denotes the ratio of plain, document, and web texts in a batch, respectively. \ No newline at end of file diff --git a/xdocunifiedpretrainingforcrossformatdocumentunderstanding/images.zip b/xdocunifiedpretrainingforcrossformatdocumentunderstanding/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..eb751ca53e3bc71c3269e28561f5835f176e1760 --- /dev/null +++ b/xdocunifiedpretrainingforcrossformatdocumentunderstanding/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9fb9d2d971e0048f9b166d6c064e78e454c247f2a83113a34e70b98335939407 +size 411102 diff --git a/xdocunifiedpretrainingforcrossformatdocumentunderstanding/layout.json b/xdocunifiedpretrainingforcrossformatdocumentunderstanding/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..8de9680c73a19d3fbe2960cc3bb300ae8fc0dc57 --- /dev/null +++ b/xdocunifiedpretrainingforcrossformatdocumentunderstanding/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8fd13d581522a4ac52b0ef92c090f10e057359c54a8ecd7387cbf9ddfd98424d +size 344883 diff --git a/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/ee27e2ae-2f6c-4ec3-9a38-fe174e13c7e1_content_list.json b/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/ee27e2ae-2f6c-4ec3-9a38-fe174e13c7e1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b0a9a6e037015435542e4bb42082af4b36449990 --- /dev/null +++ b/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/ee27e2ae-2f6c-4ec3-9a38-fe174e13c7e1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:96f27538b43647b3bca4965291247b39fc3f20b03680191a64f4191edc7df463 +size 79580 diff --git 
a/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/ee27e2ae-2f6c-4ec3-9a38-fe174e13c7e1_model.json b/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/ee27e2ae-2f6c-4ec3-9a38-fe174e13c7e1_model.json new file mode 100644 index 0000000000000000000000000000000000000000..8356b7d65d1b134b3bad61e17656320d16878c71 --- /dev/null +++ b/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/ee27e2ae-2f6c-4ec3-9a38-fe174e13c7e1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b07fdc5d1c638c87aa7e96c1547f9dda3ecdadaf94d2b0288c00ad475dad83d +size 98616 diff --git a/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/ee27e2ae-2f6c-4ec3-9a38-fe174e13c7e1_origin.pdf b/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/ee27e2ae-2f6c-4ec3-9a38-fe174e13c7e1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..062ad1dc2e896dd4cd33ac01e5f7e7470e2625ae --- /dev/null +++ b/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/ee27e2ae-2f6c-4ec3-9a38-fe174e13c7e1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb5960842cf38ea24af152c17d4ab80cffae294304298ca391ce570d76432ad7 +size 389167 diff --git a/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/full.md b/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d95b951e5f98b8d5bcb6afb0516ef0b2209936a9 --- /dev/null +++ b/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/full.md @@ -0,0 +1,318 @@ +# XRICL: Cross-lingual Retrieval-Augmented In-Context Learning for Cross-lingual Text-to-SQL Semantic Parsing + +Peng 
Shi$^{\spadesuit}$, Rui Zhang$^{\heartsuit}$, He Bai$^{\spadesuit}$, and Jimmy Lin$^{\spadesuit}$

$\spadesuit$ University of Waterloo $\quad \heartsuit$ Penn State University

{peng.shi,he.bai,jimmylin}@uwaterloo.ca, rmz5227@psu.edu

# Abstract

In-context learning using large language models has recently shown surprising results for semantic parsing tasks such as Text-to-SQL translation. Prompting GPT-3 or Codex using several examples of question-SQL pairs can produce excellent results, comparable to state-of-the-art finetuning-based models. However, existing work primarily focuses on English datasets, and it is unknown whether large language models can serve as competitive semantic parsers for other languages. To bridge this gap, our work focuses on cross-lingual Text-to-SQL semantic parsing for translating non-English utterances into SQL queries based on an English schema. We consider a zero-shot transfer learning setting with the assumption that we do not have any labeled examples in the target language (but have annotated examples in English). This work introduces the XRICL framework, which learns to retrieve relevant English exemplars for a given query to construct prompts. We also include global translation exemplars for a target language to facilitate the translation process for large language models. To systematically evaluate our model, we construct two new benchmark datasets, XSPIDER and XKAGGLE-DBQA, which include questions in Chinese, Vietnamese, Farsi, and Hindi. Our experiments show that XRICL effectively leverages large pre-trained language models to outperform existing baselines. Data and code are publicly available at https://github.com/Impavidity/XRICL.

# 1 Introduction

Semantic parsing is the task of translating natural language questions into meaning representations such as Lambda DCS (Liang, 2013), Python code (Yin et al., 2018), and SQL (Yu et al., 2018).
More recently, Text-to-SQL semantic parsing has attracted attention from academia and industry due to its challenging setup and practical applications. Cross-lingual Text-to-SQL semantic parsing (Sherborne and Lapata, 2022b; Min et al., 2019; Sherborne et al., 2020) aims to translate non-English utterances into SQL queries based on an English schema (assuming we have an internationalized database), enabling users to query databases in non-English languages. For example, such a system could help people from around the world access the US government's open data with natural language questions in different languages.

State-of-the-art approaches for Text-to-SQL semantic parsing have been greatly improved by finetuning pre-trained language models in a sequence-to-sequence formulation (Scholak et al., 2021; Yin et al., 2020; Herzig et al., 2020; Yu et al., 2021a,b; Shi et al., 2021a). More recently, in-context learning with large language models (LLMs), such as GPT-3 (Brown et al., 2020) and Codex (Chen et al., 2021), has emerged as a new learning paradigm. This paradigm enables effective few-shot learning without model finetuning, showing its practical and scientific value (Beltagy et al., 2022). Recent papers have also shown promising results applying in-context learning to the Text-to-SQL task. Rajkumar et al. (2022) studied whether LLMs are already competitive Text-to-SQL semantic parsers without further finetuning on task-specific training data. Additionally, Poesia et al. (2022) and Rubin et al. (2022) investigated the exemplar retrieval problem for the semantic parsing task.

However, previous work mostly focused on English utterances, leaving other languages behind. It is unclear whether LLMs are competitive for cross-lingual Text-to-SQL with English exemplars using in-context learning.
Even in the mono-lingual setting (where the exemplars and the query are in the same language), many approaches are not practical beyond English due to the paucity of target language query-SQL exemplars.

To bridge this gap, we propose XRICL, a novel framework based on LLMs with in-context learning for cross-lingual Text-to-SQL semantic parsing. Specifically, the task is to generate SQL queries for non-English queries based on an English schema and an English query-SQL candidate pool. Our framework first constructs the context prompt by retrieving the most relevant English query-SQL exemplars for each target language query. Since we do not have any training data in the target language, we cannot train a retriever for target queries directly. Our solution is to train an English exemplar retriever with mT5 (Xue et al., 2021) and adopt a model-based cross-lingual transfer method for cross-lingual retrieval. The English exemplar retriever is trained with feedback from the LLM itself by distilling soft labels (likelihood).

![](images/7660cb0b41f79c943239d1a69055dd1de648f92ae98e1cb857a401f0a97a9959.jpg)
![](images/a3f0eaca0e178f59f176406ea689e6cb9500ab7431fa90cec1c987b8b4e2ad30.jpg)
Figure 1: Overview of our proposed XRICL framework. Given a labeled English question-SQL candidate pool and the non-English question as input, our framework uses in-context learning with a large pre-trained language model (e.g., Codex) to generate SQL queries in four steps: (1) Cross-lingual Exemplar Retrieval, (2) Exemplar Reranking, (3) Prompt Construction with Translation as Chain-of-Thought, and (4) Inference.

Our framework introduces an additional exemplar into the LLM's input context to instruct the model to translate the target query into English and then to translate the English query into SQL; this approach is inspired by recent work on chain-of-thought prompting (Wei et al., 2022; Shi et al., 2022).
However, in our framework, this additional exemplar is identical for all test queries, which means that we only need a single pair of translations for any English-target language pair, requiring minimal translation effort.

During the inference process, the language model is expected to generate the English translation first and then the SQL query. In our experiments, we find that our proposed retriever and reranker can improve the LLMs' cross-lingual few-shot in-context learning performance by a large margin, and further improvements can be observed by adding an additional translation exemplar.

We further construct two benchmarks, XSPIDER and XKAGGLE-DBQA, to systematically evaluate the proposed framework in many languages. For XSPIDER, besides adopting existing work, including CSPIDER (Min et al., 2019) and VSPIDER (Tuan Nguyen et al., 2020), we further translate the SPIDER dataset into Farsi and Hindi for evaluation. For XKAGGLE-DBQA, we translate the English KAGGLE-DBQA dataset into Chinese, Farsi, and Hindi. Experimental results show that our proposed framework improves effectiveness compared to baseline systems.

Our contributions are summarized as follows: (1) We propose a novel retrieve-and-rerank framework to improve the exemplar selection process for in-context learning for cross-lingual Text-to-SQL semantic parsing. To the best of our knowledge, we are the first to explore the effectiveness of large pre-trained language models for cross-lingual Text-to-SQL semantic parsing. (2) We propose to use translation as a chain-of-thought prompt in the inference process, bridging the cross-lingual gap for large language models. (3) Last, we construct two new benchmarks, XSPIDER and XKAGGLE-DBQA, to facilitate evaluation of cross-lingual Text-to-SQL semantic parsing.
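As a rough illustration of the prompting idea just described (a shared translation exemplar followed by retrieved English exemplars and the test instance), a prompt could be assembled as simple string concatenation. The linearization format, instruction wording, and toy examples below are our own assumptions, not the paper's exact template:

```python
# Illustrative sketch of translation-as-chain-of-thought prompt assembly;
# the format and wording are assumptions for illustration only.
def build_prompt(translation_exemplar, exemplars, test_schema, test_question):
    parts = []
    # Global translation exemplar, shared across all test queries in a language.
    parts.append(f"Question: {translation_exemplar['question']}\n"
                 f"Translate into English: {translation_exemplar['english']}")
    # Retrieved (and reranked) English (schema, question, SQL) exemplars.
    for ex in exemplars:
        parts.append(f"Schema: {ex['schema']}\n"
                     f"Question: {ex['question']}\n"
                     f"SQL: {ex['sql']}")
    # Test instance: schema and question only; the model is expected to
    # generate the English translation first and then the SQL query.
    parts.append(f"Schema: {test_schema}\n"
                 f"Question: {test_question}\n"
                 f"Translate into English:")
    return "\n\n".join(parts)

prompt = build_prompt(
    {"question": "有多少歌手?", "english": "How many singers are there?"},
    [{"schema": "singer(id, name)",
      "question": "List the names of all singers.",
      "sql": "SELECT name FROM singer"}],
    test_schema="concert(id, year)",
    test_question="每年有多少场音乐会?",
)
print(prompt)
```

In the paper's framework, the exemplars would come from the DE-Retriever and DE-Reranker, and the completion would be generated by Codex with greedy decoding.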
# 2 Task Formulation

Given a database whose schema $s$ is in English (denoted as the source language), our task is to translate a non-English (denoted as the target language) example $x$ (where $x$ includes an utterance $u$ and the schema $s$) into a SQL query $a$. In this work, we explore large pre-trained language models such as Codex for this Text-to-SQL task with in-context learning. To support in-context learning, labeled candidates of (utterance, schema, SQL) triples are required. Since more annotated resources are available in English, we assume that the labeled candidate set $D$ is in English. Overall, in-context learning is an efficient method to leverage large pre-trained language models without expensive parameter fine-tuning. Furthermore, the candidate pool can be easily expanded for better generalization to new domains.

# 3 The XRICL Framework

Our XRICL framework is shown in Figure 1 and consists of four steps:

(1) Cross-lingual Exemplar Retrieval: Retrieve a list of $N$ English exemplars that are relevant to the input non-English example $x$.
(2) Exemplar Reranking: Rerank the retrieved $N$ exemplars and use the top $K$ exemplars to construct prompts.
(3) Prompt Construction with Translation as Chain of Thought: Construct a prompt consisting of the translation exemplar as a chain of thought, the selected $K$ exemplars, and the input example.
(4) Inference: Feed the prompt into a pre-trained language model to generate SQL.

# 3.1 Cross-lingual Exemplar Retriever

Given a non-English question, the goal of the cross-lingual exemplar retriever is to efficiently find relevant exemplars from the English candidate pool that can improve the generator's predictions. Considering that we use labeled examples in English (a high-resource language) as candidates, we formulate this step as a cross-lingual retrieval problem, where the test question is in a non-English language.
In this case, traditional term matching methods such as BM25 (Robertson and Zaragoza, 2009) or BM25 + RM3 query expansion (Lin, 2018) cannot be applied due to token mismatch. Instead, we propose to use a bi-encoder for cross-lingual semantic retrieval with model-based zero-shot transfer. We further improve the retriever with distillation-based training.

Model. Here, we leverage the popular bi-encoder architecture known as dense passage retriever (DPR) (Karpukhin et al., 2020), where the query and candidates are mapped into representation vectors independently. The retriever uses a dense encoder $\mathrm{E}_u(\cdot)$ that converts an utterance into a $d$-dimensional vector and builds an index over the candidate pool that is used for retrieval.

For a test instance $x$, we use the same dense encoder to map the utterance into a $d$-dimensional vector (denoted the query vector). Based on the query vector, the closest top $N$ exemplars are retrieved from the pre-built index based on the predefined distance function. Following Karpukhin et al. (2020), we define the distance function as

$$
\mathrm{sim}(x, z) = \mathrm{E}_u(x)^{\top} \mathrm{E}_u(z) \tag{1}
$$

where $Z$ is the set of candidate exemplars and $z \in Z$. We use a transformer as the dense encoder, and the average of the contextual embeddings of the utterance tokens is taken as the representation of the encoded text.

Model-based Cross-lingual Transfer. Considering that we do not have training data in target languages, we adopt a model-based cross-lingual transfer method, where we leverage the zero-shot cross-lingual transfer ability of multilingual pre-trained transformers such as mBERT (Devlin et al., 2019), XLM-RoBERTa (Conneau et al., 2020), mBART (Liu et al., 2020), and mT5 (Xue et al., 2021).
Specifically, we train the dense retriever in the source language, where both the query utterance and candidate utterances are in English (in our case), and apply inference directly on query utterances in the target language, retrieving English exemplars in a cross-lingual manner.

Distillation-based Training. One common practice for bi-encoder training is contrastive learning. Given a query, positive examples and negative examples are required. The model is optimized such that examples from the positive class have similar representations and examples from the negative class have different representations.

The key here is how to define positive and negative examples for the semantic parsing task. Recently, Hu et al. (2022) used the similarity of target meaning representations to first rank the candidates and choose the top-$k$ as positive examples and the bottom-$k$ as negative examples. Instead of using human-designed relevance metrics, Rubin et al. (2022) proposed to use a language model to label positive and negative examples for contrastive learning; similar to Hu et al. (2022), hard labels are used. Another way to train the bi-encoder is to use a regression-based loss function. Poesia et al. (2022) proposed to retrieve exemplars that have relevant program structures (the tree edit distance between SQL abstract syntax trees is used as the relevance metric) for the test utterances, and the model is optimized with a mean-squared error loss for predicting the similarity score.

![](images/1ac1b49c98c7abb06464670a07e8d4b84eb4d8c031f833f1c7d49648ae6dc216.jpg)
Figure 2: Illustration of distillation-based training. The contribution distribution is the likelihood distribution of the top-$N$ exemplars produced by the LLM. The relevance distribution is the ranking score distribution produced by the retriever.

As an alternative to these approaches, we train our retriever by distilling the LLM's scoring function.
This scoring function calculates the ground-truth SQL query's likelihood given an English exemplar $z_k$ and the input utterance $x$, which estimates the importance of this exemplar for parsing the given input utterance. Hence, we score the retrieved English exemplars with an LLM and optimize the KL divergence between the LLM's ranking scores and the retriever's ranking scores to update the retriever, as shown in Figure 2. This retriever is denoted DE-Retriever (Distillation-based Exemplar Retriever). Intuitively, with the KL divergence loss function, the model tries to match the probability of retrieving an exemplar $z_k$ with the contribution of that exemplar to the generated SQL query $a$.

We first obtain the $N$ exemplars with the highest scores based on Equation (1), denoted as $Z_{top-N}$. Our loss function is defined as:

$$
\mathcal{L}_{\text{distill}} = \mathrm{KL}\big(\mathrm{SG}(p(z_n \mid x, a, Z_{top-N}; G)) \,\|\, p(z_n \mid x, Z; E)\big), \tag{2}
$$

where $\mathrm{SG}$ denotes the stop-gradient operation, $G$ denotes the generator, and $E$ denotes the retriever encoder. We further compute $p(z_n \mid x, a, Z_{top-N}; G)$ as follows:

$$
p(z_n \mid x, a, Z_{top-N}) \propto p(a \mid x, z_n, Z_{top-N}; G)\, p(z_n \mid x, Z_{top-N}) \tag{3}
$$

We approximate the posterior under the assumption that we have a uniform prior over the set of retrieved exemplars, so $p(z_n \mid x, Z_{top-N})$ is approximated as $\frac{1}{N}$. We further compute $p(a \mid x, z_n, Z_{top-N}; G)$ as:

$$
\frac{\exp\left(p(a \mid x, z_n)\right)}{\sum_{j=1}^{N} \exp\left(p(a \mid x, z_j)\right)} \tag{4}
$$

where $p(a \mid x, z_j)$ is computed with the generator.
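Equations (2)–(4) amount to matching two softmax distributions over the top-$N$ exemplars. A minimal sketch with toy numbers follows; for numerical stability this version applies the softmax to sequence log-likelihoods directly, a slight simplification of Equation (4):

```python
# Minimal sketch of the distillation objective (Eqs. 2-4); toy numbers,
# not the authors' code.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def kl_distill_loss(gen_logliks, retriever_scores):
    """KL(SG(contribution distribution) || retriever relevance distribution)."""
    # Contribution distribution from the generator (treated as a constant
    # target, i.e., the stop-gradient term in Eq. 2).
    p_gen = softmax(np.asarray(gen_logliks, dtype=float))
    # Relevance distribution from the retriever's Eq. (1) dot-product scores.
    p_ret = softmax(np.asarray(retriever_scores, dtype=float))
    return float(np.sum(p_gen * (np.log(p_gen) - np.log(p_ret))))

# gen_logliks[n] stands in for log p(a | x, z_n): the generator's
# log-likelihood of the gold SQL a, summed over its tokens.
gen_logliks = [-3.2, -1.1, -5.0]   # exemplar z_2 helps the generator the most
ret_scores = [0.4, 2.0, -0.5]      # retriever already ranks z_2 highest

loss = kl_distill_loss(gen_logliks, ret_scores)
assert loss >= 0.0                 # KL divergence is non-negative
assert kl_distill_loss([0.0, 1.0], [0.0, 1.0]) < 1e-9  # zero when they match
print(round(loss, 4))
```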
More specifically, we use example $z_j$ as the prompt and concatenate it with the test instance $u$ and the target SQL $a$. Then we feed it to the generator to compute the log probability of each token, $\log(p(a_i))$, in the target SQL query $a$; $p(a \mid x, z_j)$ can then be computed as $\exp(\sum_i \log(p(a_i)))$.

# 3.2 Exemplar Reranking

For tasks such as information retrieval and open-domain question answering, reranking is widely adopted to further improve retrieval results by incorporating a reranker. Such a two-stage procedure is also useful in a variety of natural language processing tasks. In this work, following the retrieval-and-rerank idea, we propose to incorporate an exemplar reranker in our framework. This reranker can leverage token-level interactions between the utterances to better rank the exemplars.

More specifically, the query utterance $u$ and the candidate utterance $u_z$ are concatenated together with special tokens: [CLS] $u$ [SEP] $u_z$ [SEP]. The tokenized input is fed into a transformer model. An MLP with sigmoid activation is applied on top of the contextual embedding of the [CLS] token to obtain the relevance score of the candidate example (Lin et al., 2021). Sigmoid cross-entropy loss is used, and the model is optimized to produce a relevance score approximating $p(a \mid x, z_n, Z_{top-N}; G)$. This reranker is denoted DE-Reranker (Distillation-based Exemplar Reranker).

# 3.3 Prompt Construction with Translation as Chain of Thought

From the input instance $x$ and the list of retrieved-and-reranked exemplars $Z$, we construct the augmented query by concatenating exemplars with the input instance, following previous work (Hu et al., 2022; Rubin et al., 2022; Poesia et al., 2022; Liu et al., 2022; Brown et al., 2020; Pasupat et al., 2021). For each exemplar, we linearize the table schema, the question, and the SQL query. The exemplars are sorted by relevance score in descending order. For the test instance, only the table schema and the question are linearized. We denote this prompting approach Vanilla-P.

Translation as Chain of Thought: Recent work on chain-of-thought prompting is designed to solve multi-step reasoning problems by providing intermediate reasoning steps before the final answer in the prompt (Wei et al., 2022). Inspired by this, we use the translation pair (from non-English to English in our case) as an intermediate step for cross-lingual semantic parsing inference.

Specifically, a translation-based exemplar is inserted in front of $Z$. For example, in the right part of Figure 1, the grey box contains the Chinese version of the translation as a chain-of-thought prompt. The question in the prompt is in the target language, followed by the instruction Translate into English and the English translation of the question. Note that this translation-based exemplar is shared among all the test instances in that language, as shown in the left part of Figure 1. The translation-based exemplars are indexed by language code, such as zh and vi. In this way, only minimal translation effort is required to build the global translation-based exemplar. We denote this prompting approach Translation-P.

# 3.4 Inference

For inference, we feed the constructed prompt to a large pre-trained language model to generate the target SQL query with greedy decoding. In this work, we consider Codex (Codex-Davinci-001) (Chen et al., 2021) because it has shown superior performance for the English Text-to-SQL task (Poesia et al., 2022).

# 4 Experimental Settings

In this section, we describe the datasets, implementation details, and baselines for our experiments.

# 4.1 Datasets

We create two benchmarks, XSPIDER and XKAGGLE-DBQA, by translating existing English Text-to-SQL datasets into other languages, and we evaluate our methods on these two benchmarks.
XSPIDER: CSPIDER (Min et al., 2019) and VSPIDER (Tuan Nguyen et al., 2020) are Chinese (zh) and Vietnamese (vi) cross-domain Text-to-SQL datasets translated from SPIDER (Yu et al., 2018). More specifically, we use the English SPIDER training set as the candidate pool and as training data for the retriever-reranker models. We use the development sets of CSPIDER and VSPIDER for cross-lingual evaluation. We further translate the SPIDER development set into Farsi (fa) and Hindi (hi) for a more comprehensive evaluation.

XKAGGLE-DBQA: This is a recently constructed dataset for more realistic and challenging Text-to-SQL evaluation. The dataset is based on 8 databases from Kaggle. We translate the questions into Chinese (zh), Farsi (fa), and Hindi (hi) for cross-lingual evaluation. We use the English SPIDER training set as the candidate pool.

# 4.2 Experimental Details

For the exemplar retriever, we use 24-layer transformers initialized with the parameters of the mT5 encoder that is then fine-tuned on the English SPIDER dataset for the Text-to-SQL task. For the exemplar reranker, we use InfoXLM (Chi et al., 2021) as the starting point. We train the retriever and reranker on the English SPIDER dataset and then apply both models to cross-lingual retrieval and reranking in a zero-shot fashion. For the Codex configuration, we use greedy decoding by setting the temperature to zero. We use $N = 16$ and $K = 8$ for all experiments, which means that the DE-Retriever first retrieves 16 exemplars from the candidate pool and the DE-Reranker produces the top 8 exemplars for prompt construction.

In terms of evaluation metrics, we use Exact Match (EM) accuracy for both the XSPIDER and XKAGGLE-DBQA benchmarks. Following Zhong et al. (2020), we also report Test-suite (TS) accuracy. Only datasets that are aligned with the SPIDER dev set can be evaluated with TS accuracy, so TS accuracy is not applicable to the XKAGGLE-DBQA benchmark.
Because the CSPIDER dev set is only partially aligned with the SPIDER dev set, the full CSPIDER (zh-full) dev set can only be evaluated with EM accuracy. We collect a subset of the CSPIDER dev set (zh) whose queries are aligned with the English SPIDER dev set, and further evaluate this subset using TS accuracy.

# 4.3 Baselines

mT5 zero-shot transfer is a baseline model that is trained with the English SPIDER training set.
| Model | zh-full EM | zh EM | zh TS | vi EM | vi TS | fa EM | fa TS | hi EM | hi TS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (1) mT5 zero-shot | 39.7 | 47.9 | 48.4 | 42.1 | 40.1 | 41.3 | 39.5 | 41.2 | 39.7 |
| (2) mUSE | 38.4 | 43.0 | 46.8 | 31.8 | 33.4 | 28.9 | 31.1 | 22.2 | 23.7 |
| (3) mSBERT | 37.9 | 41.3 | 47.1 | 34.6 | 33.5 | 29.3 | 31.8 | 22.0 | 22.3 |
| (4) mT5-encoder | 44.4 | 48.1 | 51.4 | 41.3 | 39.5 | 38.4 | 38.5 | 28.6 | 27.0 |
| (5) DE-Retriever | 46.0 | 50.4 | 53.9 | 42.2 | 40.7 | 38.2 | 40.0 | 29.9 | 27.9 |
| (6) DE-$\mathbf{R}^2$ | 46.4 | 52.1 | 55.3 | 44.4 | 41.9 | 40.0 | 40.6 | 30.0 | 28.2 |
| (7) + Translation-P | 47.4 | 52.7 | 55.7 | 43.7 | 43.6 | 43.2 | 45.1 | 32.6 | 32.4 |
Table 1: Results on the XSPIDER dev set. "zh-full" and "zh" are two different splits from CSPIDER (Min et al., 2019). EM and TS are exact match accuracy and test suite accuracy, respectively. Entry (5) is based on the DE-Retriever with Vanilla-P. Entry (6) is based on the DE-Retriever and DE-Reranker (denoted as DE-$\mathbf{R}^2$) with Vanilla-P. Entry (7) is based on DE-$\mathbf{R}^2$ with Translation-P.

The model is based on the pre-trained sequence-to-sequence multilingual language model mT5-large (Xue et al., 2021). This model has zero-shot cross-lingual transfer ability, with which the model can directly handle non-English utterances.

mUSE and mSBERT are baselines that use unsupervised retrievers to obtain exemplars: the multilingual Universal Sentence Encoder (Yang et al., 2020) and multilingual Sentence-BERT (Reimers and Gurevych, 2019). Prompts are then constructed for in-context learning with Codex.

# 5 Results

# 5.1 Results on XSPIDER

Results on XSPIDER are shown in Table 1. We report EM and TS accuracy. For the full CSPIDER dataset (zh-full), since TS accuracy is not supported, we only report EM accuracy. We report both TS and EM accuracy on the subset of CSPIDER. Entry (1) reports the zero-shot performance of the mT5 model that is trained on the English SPIDER dataset. On zh-full, vi, fa, and hi, the mT5 zero-shot method obtains on average 41.1 EM accuracy and 39.8 TS accuracy (average TS accuracy is computed without zh-full because the metric cannot be computed on the full CSPIDER).

From entry (2) to entry (7), the methods are based on in-context few-shot learning. For entries (2-6), the prompting method is Vanilla-P. For entry (7), prompting with Translation-P is applied.
+ +With unsupervised exemplar retrievers such as mUSE and mSBERT, shown in entries (2) and (3), Codex performs worse than mT5 zero-shot transfer, especially for Farsi ($39.5 \rightarrow 31.1/31.8$ on TS accuracy) and Hindi ($39.7 \rightarrow 23.7/22.3$ on TS accuracy). By switching the unsupervised exemplar retriever to the mT5-encoder, which is the encoder component of the fine-tuned mT5 model, the effectiveness of Codex improves by a large margin. For example, on the CSPIDER subset, TS accuracy improves to 51.4 from 47.1, outperforming mT5 zero-shot performance by 3 points. This indicates that the exemplar retrieval component is essential to take advantage of the competitive performance of LLMs such as Codex. For languages such as Vietnamese and Farsi, Codex is comparable to mT5 zero-shot transfer, while for Hindi, there is still a large gap (39.7 vs. 27.0 on TS accuracy). + +By applying our proposed distillation-based retriever-reranker pipeline (denoted as DE-$\mathbf{R}^2$) for retrieving exemplars, impressive improvements can be observed in all four languages by comparing entry (6) with entry (4). Our end-to-end results are shown in entry (7), where we see that our proposed framework achieves the best results for most of the languages (except Vietnamese EM accuracy) in the in-context learning setting. + +Comparing the best results of in-context learning with mT5 zero-shot results, we can see that Codex can achieve better performance in Chinese, Vietnamese, and Farsi. For example, XRICL outperforms mT5 zero-shot by 7.7 EM accuracy on the full dev set of CSPIDER. One exception is Hindi, where the best in-context learning performance cannot match mT5 zero-shot transfer. One possible explanation is that Codex has weaker modeling ability in Hindi because less Hindi data was accessible during training.
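The retriever comparison above can be made concrete with a minimal dense-retrieval sketch: the test utterance and every candidate exemplar are embedded (in our setting, by a multilingual encoder such as the fine-tuned mT5 encoder), and candidates are ranked by cosine similarity. The toy two-dimensional vectors below stand in for real encoder outputs:

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def retrieve_exemplars(query_vec, pool, k=4):
    """Rank candidate (utterance, SQL) exemplars by embedding similarity.

    `pool` maps each exemplar to a pre-computed embedding; in a real
    pipeline the embeddings would come from a multilingual encoder.
    """
    scored = sorted(pool, key=lambda ex: cosine(query_vec, ex["vec"]),
                    reverse=True)
    return scored[:k]

# Toy 2-d embeddings standing in for real encoder outputs.
pool = [
    {"utterance": "How many singers do we have?",
     "sql": "SELECT count(*) FROM singer", "vec": [0.9, 0.1]},
    {"utterance": "List all airports.",
     "sql": "SELECT name FROM airports", "vec": [0.1, 0.9]},
]
# A query embedding close to the first exemplar retrieves it first.
top = retrieve_exemplars([0.8, 0.2], pool, k=1)
```

This is only the unsupervised starting point; the DE-Retriever further trains such an encoder with distilled signals from the generator.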
+ +# 5.2 Results on XKAGGLE-DBQA

XKAGGLE-DBQA is widely regarded as a more realistic evaluation for the Text-to-SQL parsing task: its databases are real-world databases with abbreviated column names. We use the training set of English SPIDER as the candidate pool. In this case, both the model's generalization ability and its cross-lingual transfer capability can be tested.

| Model | zh | fa | hi |
| --- | --- | --- | --- |
| (1) mT5 zero-shot | 9.7 | 8.1 | 7.6 |
| (2) mUSE | 20.7 | 12.4 | 16.2 |
| (3) mSBERT | 14.7 | 13.0 | 11.9 |
| (4) mT5-Encoder | 22.2 | 16.8 | 16.2 |
| (5) DE-Retriever | 26.5 | 18.4 | 16.8 |
| (6) DE-R2 | 27.0 | 18.4 | 17.8 |
| (7) + Translation-P | 28.1 | 20.0 | 19.5 |

Table 2: Results on the XKAGGLE-DBQA test set. We report exact match (EM) accuracy.

The XKAGGLE-DBQA results are shown in Table 2. Entry (1) shows the zero-shot cross-lingual cross-domain transfer performance of the mT5 model trained on the English SPIDER dataset. For example, on Chinese KAGGLE-DBQA, mT5 only obtains 9.7 EM accuracy. For comparison, mT5 reaches 20.0 EM accuracy on the English test set in a zero-shot fashion, outperforming the previous state of the art obtained by RAT-SQL (Wang et al., 2020) with 18.4 EM accuracy (Lee et al., 2021) using column descriptions and model adaptation. This indicates that the mT5 model is more robust than RAT-SQL on domain transfer. However, the effectiveness degrades drastically when mT5 is applied to non-English languages. The mT5 zero-shot method on average obtains only 8.5 EM accuracy in the three languages.

For the Codex-based in-context learning methods, the results are shown in entries (2-7). With unsupervised retrieval methods such as mUSE, Codex can reach 20.7 EM accuracy in Chinese, improving over the zero-shot mT5 baseline. Comparing entries (2) and (3), there is no clear winner between these two unsupervised retrieval methods. Our end-to-end results are shown in entry (7), which achieves state-of-the-art performance on the XKAGGLE-DBQA benchmark, with 22.5 EM accuracy on average, which is better than the mT5 zero-shot method. For example, on Chinese KAGGLE-DBQA, our framework obtains an 18.4 point improvement over mT5 zero-shot transfer.

# 6 Analysis

# 6.1 Effectiveness on English Text-to-SQL

We show that our model is comparable to other in-context learning methods for English semantic
| Model | EM | EX | TS |
| --- | --- | --- | --- |
| Rubin et al. (2022) (our impl.) | 48.5 | 53.5 | 50.3 |
| Poesia et al. (2022) | - | 60.0 | - |
| Rajkumar et al. (2022) | - | 67.0 | 55.1 |
| DE-Retriever (Ours) | 53.5 | 60.3 | 56.3 |
+ +Table 3: Results on the English SPIDER development set. Our system achieves results comparable to other state-of-the-art in-context learning methods for English Text-to-SQL. EM: Exact Match Accuracy. EX: Execution Accuracy. TS: Test-suite Accuracy (Zhong et al., 2020). + +parsing. Through this comparison, we show that our framework is built on a competitive backbone for Text-to-SQL. We use the DE-Retriever as the backbone model in the ablation study and compare with three recent methods, described as follows: Rubin et al. (2022) used hard labels obtained from the generator to train the retriever. Poesia et al. (2022) used the tree edit distance of SQL queries as a similarity function: a smaller distance means better exemplar quality for the specific test instance. The ranking model is optimized to predict the tree edit distance between a pair of target SQL queries based on the corresponding utterance pair. Rajkumar et al. (2022) designed an efficient prompt that leverages table contents for zero-shot Text-to-SQL. We refer the reader to the original papers for more details. + +Table 3 shows the results on the SPIDER development set. Our backbone system (DE-Retriever + Codex Generator) obtains 53.5 EM accuracy and 60.3 EX accuracy, which is comparable to the 60.0 EX accuracy reported by Poesia et al. (2022). Compared to Rajkumar et al. (2022), our system obtains comparable TS accuracy (56.3 vs. 55.1). + +# 6.2 Effectiveness of DE-R² + +We analyze the effectiveness of DE-$\mathbf{R}^2$ on the XSPIDER benchmark and the XKAGGLE-DBQA benchmark. By comparing entries (5) and (4) in Table 1 and Table 2, we can observe that the DE-Retriever can improve over the mT5-encoder baseline in most of the languages (except EM accuracy in Farsi). Comparing entries (6) and (5), we find that the reranker can further improve the EM accuracy and the TS accuracy. This indicates that our XRICL framework is effective in selecting good exemplars as prompts.
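The metrics in Table 3 can be illustrated concretely. Exact match compares query strings (the official Spider metric compares parsed query components rather than raw strings), execution accuracy compares the rows the two queries return on a database, and test-suite accuracy repeats the execution check over many distilled databases. A simplified sketch, not the official evaluation scripts:

```python
import sqlite3

def exact_match(pred_sql: str, gold_sql: str) -> bool:
    # Naive string-level exact match after whitespace/case normalization.
    # The official Spider metric instead compares parsed query components.
    norm = lambda s: " ".join(s.lower().split())
    return norm(pred_sql) == norm(gold_sql)

def execution_match(pred_sql: str, gold_sql: str, conn) -> bool:
    # Execution accuracy: two queries agree if they return the same rows.
    # Test-suite accuracy extends this by checking against many databases.
    try:
        pred = conn.execute(pred_sql).fetchall()
        gold = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False
    return sorted(map(tuple, pred)) == sorted(map(tuple, gold))

# A tiny in-memory database for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE singer (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO singer VALUES (?, ?)", [(1, "A"), (2, "B")])

gold = "SELECT count(*) FROM singer"
pred = "SELECT count(id) FROM singer"
# Different strings, same result rows on this database:
# exact match fails, execution match succeeds.
```

This is why EM is the strictest of the three columns: a prediction can fail EM while passing EX and TS.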
+ +# 6.3 Effectiveness of Chain-of-Thought Prompt

By comparing entries (7) and (6) in Table 1 and Table 2, we find that Translation-P can further improve the semantic parsing ability of Codex on top of DE-$\mathbf{R}^2$, except for EM accuracy in Vietnamese.

| Model | zh-full EM | zh EM | zh TS |
| --- | --- | --- | --- |
| (1) DE-R2 + Translation-P | 47.4 | 52.7 | 55.7 |
| (2) T-Oracle | 46.3 | 52.6 | 57.6 |
| (3) TG-Oracle | 52.5 | 58.0 | 62.2 |

Table 4: Results with oracles: T-Oracle is the Template Oracle and TG-Oracle is the Template+Generator Oracle. EM accuracy and TS accuracy are reported.

# 6.4 Oracle Performance

It is interesting to investigate the upper bound of Codex on cross-lingual Text-to-SQL semantic parsing. We design two pipelines to probe the capabilities of Codex when an oracle is available (i.e., the target SQL query is accessible to help the retrieval and reranking). We experiment with two different oracles:

Template Oracle: We retrieve exemplars using the gold parse. The template is extracted from the target SQL query and only exemplars with the same SQL template are retrieved. This is based on the assumption that utterances with the same SQL templates share the same query intent and the generator can benefit from these exemplars.

Template Oracle + Codex LM Oracle: Here we introduce an oracle from the generator (Codex) into the pipeline. More specifically, we replicate the training process in the testing phase. The exemplars with the same SQL templates are first retrieved. For each retrieved exemplar, we use Codex to compute its contribution to the test instance as the reranking score. We then use the top-$k$ candidates as the exemplars.

The experimental results are shown in Table 4. Comparing entries (1) and (2), we can observe that our XRICL framework can outperform the Template Oracle in terms of EM accuracy on the full dataset and is comparable on the subset. The Template Oracle + Codex LM Oracle reaches 52.5 on the full dataset and 58.0 on the subset in terms of EM accuracy. This suggests that signals from the Codex LM are useful and that there is additional room for improvement in our framework.
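The oracle reranking step can be sketched as follows. The function `lm_logprob` is a stand-in for querying the generator for the (log-)likelihood of the target SQL given a prompt containing one candidate exemplar; in the real pipeline this score would come from the Codex API. The toy scorer below, which simply rewards token overlap, is purely illustrative:

```python
def rerank_by_lm(exemplars, test_utterance, target_sql, lm_logprob, k=4):
    """Keep the top-k exemplars by how much they help the LM
    produce the target SQL (the oracle reranking score)."""
    def score(ex):
        # One-exemplar prompt ending right before the target SQL.
        prompt = f"{ex['utterance']}\t{ex['sql']}\n{test_utterance}\t"
        return lm_logprob(prompt, target_sql)
    return sorted(exemplars, key=score, reverse=True)[:k]

def toy_logprob(prompt, continuation):
    # Toy stand-in for an LM score: reward lexical overlap between the
    # prompt and the continuation. A real scorer would sum the LM's
    # token log-probabilities of `continuation` given `prompt`.
    return len(set(prompt.split()) & set(continuation.split()))

exemplars = [
    {"utterance": "Count the singers.", "sql": "SELECT count(*) FROM singer"},
    {"utterance": "List airport names.", "sql": "SELECT name FROM airports"},
]
best = rerank_by_lm(exemplars, "How many concerts are there?",
                    "SELECT count(*) FROM concert", toy_logprob, k=1)
```

The counting exemplar scores higher because its SQL shares more structure with the target query, mirroring how the LM-oracle score prefers exemplars that make the gold SQL more likely.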
+ +# 7 Related Work + +In-context Learning: In-context learning is a relatively new paradigm for zero-shot and few-shot learning with large-scale pre-trained language models, first proposed with GPT-3 (Brown et al., 2020). In-context learning for semantic parsing has been intensively investigated recently (Pasupat et al., 2021; Rubin et al., 2022; Shin and Van Durme, 2022; Rajkumar et al., 2022; Hu et al., 2022; Xie et al., 2022; Chen et al., 2021; Poesia et al., 2022). However, most of this work considers only English, without examining the cross-lingual ability of the proposed methods. Winata et al. (2021) evaluated the multilinguality of pre-trained language models on non-English multi-class classification with in-context learning. However, their task is simpler than semantic parsing tasks such as ours. To the best of our knowledge, we are the first to explore cross-lingual Text-to-SQL semantic parsing under the in-context learning setting. + +Cross-lingual Semantic Parsing: Cross-lingual semantic parsing aims to handle user utterances from multiple languages and translate them into formal representations. Recent advances can be categorized into two threads: multilingual dataset creation and model development. + +For example, Bai et al. (2018) adapted a Chinese dialogue parsing dataset into English. Min et al. (2019) and Tuan Nguyen et al. (2020) adapted the English Text-to-SQL dataset SPIDER (Yu et al., 2018) into Chinese and Vietnamese, which are used in this work for evaluation. Some multilingual datasets with different formal representations have also been created, such as SPARQL (Cui et al., 2022) and TOP (Li et al., 2021). + +In terms of model development, the work of Shao et al. (2020) is the most relevant to ours: they leveraged bilingual input for the semantic parsing task. However, they used RNN models and focused on multilingual representation alignment with pre-training.
Instead, our work focuses on representation mixup with large multilingual pretrained models. Improving cross-lingual zero-shot transfer is another direction (Sherborne et al., 2020; Sherborne and Lapata, 2022b,a). + +Multilingual and Cross-lingual Retrieval: In multilingual retrieval, the task is to retrieve relevant documents where the user queries and the corpora are in the same language. Recent work takes advantage of cross-language transfer using pre-trained multilingual models (Shi et al., 2020, 2021b; Zhang et al., 2022b, 2021). For example, Shi et al. (2021b) used DPR to retrieve documents based on ad-hoc queries in six languages. On the + +other hand, cross-lingual retrievers help users find relevant documents in languages that are different from that of the queries. This task has a long history that goes back several decades (Nie, 2010), but recent work includes Zhang et al. (2022a); Litschko et al. (2022); Sun and Duh (2020). For instance, Asai et al. (2021) created a cross-lingual open-domain question answering dataset where the system is required to retrieve passages from different languages to answer user questions. + +# 8 Conclusion + +In this work, we proposed the XRICL framework that improves in-context learning for cross-lingual Text-to-SQL semantic parsing. The retrieve-and-rerank models that we propose can learn signals from large pre-trained models (Codex) to improve the quality of selected exemplars, which can further benefit the generator. By integrating prompts inspired by chain of thought, our proposed Translation-P method can bridge the cross-lingual gap for the generator. Extensive experiments on XSPIDER and XKAGGLE-DBQA demonstrate the effectiveness of our framework, which obtains state-of-the-art performance on few-shot in-context learning in most of the datasets, thus unlocking the potential of Codex. + +# 9 Limitations + +Our work is based on the large language model Codex, which is not open-sourced. 
To replicate our experiments, an application to OpenAI for Codex API access is required. Due to annotation costs, we were unable to evaluate on more languages than those described in this paper. In the future, we plan to collect more data to investigate Codex performance on different language families. + +# Acknowledgements + +This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada, Compute Ontario, and Compute Canada. + +# References + +Akari Asai, Jungo Kasai, Jonathan Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. 2021. XOR QA: Cross-lingual open-retrieval question answering. In Proceedings of the 2021 Conference of the North + +American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 547-564, Online. +He Bai, Yu Zhou, Jiajun Zhang, Liang Zhao, Mei-Yuh Hwang, and Chengqing Zong. 2018. Source critical reinforcement learning for transferring spoken language understanding to a new language. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3597-3607, Santa Fe, New Mexico, USA. +Iz Beltagy, Arman Cohan, Robert Logan IV, Sewon Min, and Sameer Singh. 2022. Zero- and few-shot NLP with pretrained language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 32-37, Dublin, Ireland. +Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901. +Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. 
+Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3576-3588, Online. +Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. +Ruixiang Cui, Rahul Aralikatte, Heather Lent, and Daniel Hershcovich. 2022. Compositional generalization in multilingual semantic parsing over Wikipedia. Transactions of the Association for Computational Linguistics, 10:937-955. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota. + +Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4320-4333, Online. +Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Tao Yu, Noah A. Smith, and Mari Ostendorf. 2022. In-context learning for few-shot dialogue state tracking. arXiv preprint arXiv:2203.08568. +Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. +Chia-Hsuan Lee, Oleksandr Polozov, and Matthew Richardson. 2021. KaggleDBQA: Realistic evaluation of text-to-SQL parsers. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online. +Haoran Li, Abhinav Arora, Shuohui Chen, Anchit Gupta, Sonal Gupta, and Yashar Mehdad. 2021. MTOP: A comprehensive multilingual task-oriented semantic parsing benchmark. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2950-2962, Online. +Percy Liang. 2013. Lambda dependency-based compositional semantics. arXiv preprint arXiv:1309.4408. +Jimmy Lin. 2018. The neural hype and comparisons against weak baselines. SIGIR Forum, 52(2):40-51. +Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2021. Pretrained Transformers for Text Ranking: BERT and Beyond. Morgan & Claypool Publishers. +Robert Litschko, Ivan Vulic, Simone Paolo Ponzetto, and Goran Glavaš. 2022. On cross-lingual retrieval with multilingual text encoders. Information Retrieval Journal, 25(2):149-183. +Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114, Dublin, Ireland and Online. +Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742. + +Qingkai Min, Yuefeng Shi, and Yue Zhang. 2019. A pilot study for Chinese SQL semantic parsing. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3652-3658, Hong Kong, China. +Jian-Yun Nie. 2010. *Cross-Language Information Retrieval*. Morgan & Claypool Publishers. +Panupong Pasupat, Yuan Zhang, and Kelvin Guu. 2021. Controllable semantic parsing via retrieval augmentation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7683-7698, Online and Punta Cana, Dominican Republic. +Gabriel Poesia, Alex Polozov, Vu Le, Ashish Tiwari, Gustavo Soares, Christopher Meek, and Sumit Gulwani. 2022. Synchromesh: Reliable code generation from pre-trained language models. In International Conference on Learning Representations. +Nitarshan Rajkumar, Raymond Li, and Dzmitry Bahdanau. 2022. Evaluating the text-to-SQL capabilities of large language models. arXiv preprint arXiv:2204.00498. +Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. +Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333-389. +Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2022. Learning to retrieve prompts for in-context learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2655-2671, Seattle, United States. +Torsten Scholak, Nathan Schucher, and Dzmitry Bahdanau. 2021. PICARD: Parsing incrementally for constrained auto-regressive decoding from language models.
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9895-9901, Online and Punta Cana, Dominican Republic. +Bo Shao, Yeyun Gong, Weizhen Qi, Nan Duan, and Xiaola Lin. 2020. Multi-level alignment pretraining for multi-lingual semantic parsing. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3246-3256. +Tom Sherborne and Mirella Lapata. 2022a. Meta-learning a cross-lingual manifold for semantic parsing. arXiv preprint arXiv:2209.12577. + +Tom Sherborne and Mirella Lapata. 2022b. Zero-shot cross-lingual semantic parsing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4134-4153, Dublin, Ireland. +Tom Sherborne, Yumo Xu, and Mirella Lapata. 2020. Bootstrapping a crosslingual semantic parser. arXiv preprint arXiv:2004.02585. +Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. 2022. Language models are multilingual chain-of-thought reasoners. arXiv preprint arXiv:2210.03057. +Peng Shi, He Bai, and Jimmy Lin. 2020. Cross-lingual training of neural models for document ranking. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 2768–2773. +Peng Shi, Patrick Ng, Zhiguo Wang, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Cicero Nogueira dos Santos, and Bing Xiang. 2021a. Learning contextual representations for semantic parsing with generation-augmented pre-training. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15):13806-13814. +Peng Shi, Rui Zhang, He Bai, and Jimmy Lin. 2021b. Cross-lingual training of dense retrievers for document retrieval. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 251-253, Punta Cana, Dominican Republic. +Richard Shin and Benjamin Van Durme. 2022. Few-shot semantic parsing with language models trained on code. 
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5417-5425, Seattle, United States. +Shuo Sun and Kevin Duh. 2020. CLIRMatrix: A massively large collection of bilingual and multilingual datasets for cross-lingual information retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4160-4170. +Anh Tuan Nguyen, Mai Hoang Dao, and Dat Quoc Nguyen. 2020. A pilot study of text-to-SQL semantic parsing for Vietnamese. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 4079-4085, Online. +Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7567-7578, Online. +Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903. + +Genta Indra Winata, Andrea Madotto, Zhaojiang Lin, Rosanne Liu, Jason Yosinski, and Pascale Fung. 2021. Language models are few-shot multilingual learners. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 1-15, Punta Cana, Dominican Republic. +Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, et al. 2022. UnifiedSKG: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. arXiv preprint arXiv:2201.05966. +Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. +Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-hsuan Sung, Brian Strope, and Ray Kurzweil. 2020. Multilingual universal sentence encoder for semantic retrieval. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 87-94, Online. +Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretraining for joint understanding of textual and tabular data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8413-8426, Online. +Pengcheng Yin, Chunting Zhou, Junxian He, and Graham Neubig. 2018. StructVAE: Tree-structured latent variable models for semi-supervised semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia. +Tao Yu, Chien-Sheng Wu, Xi Victoria Lin, Bailin Wang, Yi Chern Tan, Xinyi Yang, Dragomir Radev, Richard Socher, and Caiming Xiong. 2021a. GraPPa: Grammar-augmented pre-training for table semantic parsing. In International Conference on Learning Representations. +Tao Yu, Rui Zhang, Alex Polozov, Christopher Meek, and Ahmed Hassan Awadallah. 2021b. SCoRe: Pretraining for context representation in conversational semantic parsing. In International Conference on Learning Representations. +Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 + +Conference on Empirical Methods in Natural Language Processing, pages 3911-3921, Brussels, Belgium. 
+Fuwei Zhang, Zhao Zhang, Xiang Ao, Dehong Gao, Fuzhen Zhuang, Yi Wei, and Qing He. 2022a. Mind the gap: Cross-lingual information retrieval with hierarchical knowledge enhancement. Proceedings of the AAAI Conference on Artificial Intelligence, 36(4):4345-4353. +Xinyu Zhang, Xueguang Ma, Peng Shi, and Jimmy Lin. 2021. Mr. TyDi: A multi-lingual benchmark for dense retrieval. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 127–137, Punta Cana, Dominican Republic. +Xinyu Zhang, Kelechi Ogueji, Xueguang Ma, and Jimmy Lin. 2022b. Towards best practices for training multilingual dense retrieval models. arXiv preprint arXiv:2204.02363. +Ruiqi Zhong, Tao Yu, and Dan Klein. 2020. Semantic evaluation for text-to-SQL with distilled test suites. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 396-411, Online. \ No newline at end of file diff --git a/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/images.zip b/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..44ebae21709e9ba7e635472dc8d6a668e0c04243 --- /dev/null +++ b/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5952426f16d7ca35ab6f903b8266d527f18893007f7a02a54f9ae4e78bc7cbcf +size 227736 diff --git a/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/layout.json b/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6b0149a58313f7df7ead38dce4671638a405b868 --- /dev/null +++ b/xriclcrosslingualretrievalaugmentedincontextlearningforcrosslingualtexttosqlsemanticparsing/layout.json @@ -0,0 
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d19940e2b162c1c94bbf046e3038393e70e99c6b24d717ed1115ce786ecda4d +size 355673 diff --git a/yesyesyesproactivedatacollectionforaclrollingreviewandbeyond/05e50ea2-3232-4eed-a3b8-32b9012628bb_content_list.json b/yesyesyesproactivedatacollectionforaclrollingreviewandbeyond/05e50ea2-3232-4eed-a3b8-32b9012628bb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b226187107540642d3f4e8a511b49f8f75753d24 --- /dev/null +++ b/yesyesyesproactivedatacollectionforaclrollingreviewandbeyond/05e50ea2-3232-4eed-a3b8-32b9012628bb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d142433c3e385594fedab95fdd4f3ad43653a934fd374046c16987aff2fa1c3 +size 117619 diff --git a/yesyesyesproactivedatacollectionforaclrollingreviewandbeyond/05e50ea2-3232-4eed-a3b8-32b9012628bb_model.json b/yesyesyesproactivedatacollectionforaclrollingreviewandbeyond/05e50ea2-3232-4eed-a3b8-32b9012628bb_model.json new file mode 100644 index 0000000000000000000000000000000000000000..5b4f572363a79f99ea95906922a21ea1f22a9806 --- /dev/null +++ b/yesyesyesproactivedatacollectionforaclrollingreviewandbeyond/05e50ea2-3232-4eed-a3b8-32b9012628bb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dee8cfcf8d2eb74235cc9d1eae67e5682c5149d98c6a35c246caf8dab07f3e8f +size 148509 diff --git a/yesyesyesproactivedatacollectionforaclrollingreviewandbeyond/05e50ea2-3232-4eed-a3b8-32b9012628bb_origin.pdf b/yesyesyesproactivedatacollectionforaclrollingreviewandbeyond/05e50ea2-3232-4eed-a3b8-32b9012628bb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..39696235c5ea31e6c36684418461b7fea4d7ee03 --- /dev/null +++ b/yesyesyesproactivedatacollectionforaclrollingreviewandbeyond/05e50ea2-3232-4eed-a3b8-32b9012628bb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:adc012f2034c888bd1e6db1774bc302baa248b1938db91abfaa063e04bbe2742 +size 349354 diff --git a/yesyesyesproactivedatacollectionforaclrollingreviewandbeyond/full.md b/yesyesyesproactivedatacollectionforaclrollingreviewandbeyond/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8ffb7e7b7c2570ade1d51c4bebf58c4d19434ffc --- /dev/null +++ b/yesyesyesproactivedatacollectionforaclrollingreviewandbeyond/full.md @@ -0,0 +1,440 @@ +# Yes-Yes-Yes: Proactive Data Collection for ACL Rolling Review and Beyond + +Nils Dycke*, Ilia Kuznetsov*, Iryna Gurevych +Ubiquitous Knowledge Processing Lab (UKP Lab) +Department of Computer Science and Hessian Center for AI (hessian.AI) +Technical University of Darmstadt +ukp.informatik.tu-darmstadt.de + +# Abstract + +The shift towards publicly available text sources has enabled language processing at unprecedented scale, yet leaves under-serviced the domains where public and openly licensed data is scarce. Proactively collecting text data for research is a viable strategy to address this scarcity, but lacks systematic methodology taking into account the many ethical, legal and confidentiality-related aspects of data collection. Our work presents a case study on proactive data collection in peer review - a challenging and under-resourced NLP domain. We outline ethical and legal desiderata for proactive data collection and introduce "Yes-Yes-Yes", the first donation-based peer reviewing data collection workflow that meets these requirements. We report on the implementation of Yes-Yes-Yes at ACL Rolling Review1 and empirically study the implications of proactive data collection for the dataset size and the biases induced by the donation behavior on the peer reviewing platform. + +# 1 Introduction + +Empirical NLP is shaped by its data sources. 
While early work mostly considered canonical sources like newswire (Paul and Baker, 1992), the last decade has been marked by the transition to fortuitous (Plank, 2016), i.e. found, data openly available for collection. The rise of crowdsourcing platforms also made it possible to generate massive amounts of textual data on demand (Bowman et al., 2015; Nadeem et al., 2021), feeding novel NLP developments in both task-specific problem engineering and general-purpose representation learning. + +Yet problems persist with mainstream approaches to data collection. Canonical data is not representative. Not every language variety can be "found", and, by far, not every found text can be used for research (Rogers et al., 2021). Generating texts "on demand" is prone to artifacts (Gururangan et al., 2018) and might be challenging in expert domains, like scientific or clinical text. As a result, existing collection strategies in NLP leave underserved many domains and application scenarios that crucially depend on closed data or data not yet cleared for research use. + +![](images/ee10cd730f82d35e80a9ef180eda87f2299fd3f60418bc9b0319eacf3b4a5443.jpg) +Figure 1: The Yes-Yes-Yes workflow: data collection is conditioned on participants' consent (1, 3) and on the confidentiality status of the publication (2). Only data that fulfills all conditions is added to public datasets. + +Given the rising attention to the development of specialized NLP applications (Newman-Griffis et al., 2021), the ability to acquire data from the domains of interest on-demand becomes crucial. To that end, a promising but not systematically explored alternative to canonical, found, and generated source data is proactive, targeted data collection, where texts are harvested from a previously closed process in the domain of interest, and made available for research.
This requires careful consideration of the text production process in the target domain, as well as of the many ethical, legal, confidentiality-related and other aspects of data collection. In line with recent work in responsible data handling for NLP (Rogers et al., 2021; Bender and Friedman, 2018) and previous discussions on data collection in the digital mental health domain (Resnik et al., 2021), our work explores practical implications of proactive data collection from a previously under-resourced domain: peer review.

Peer review is the cornerstone of academic quality control. Ever-increasing submission rates expose the weaknesses of this process, motivating the first generation of computational studies of peer review within and beyond NLP (Kang et al., 2018; Hua et al., 2019; Dycke et al., 2021; Yuan et al., 2022; Stelmakh et al., 2020a, 2021, etc.). Such studies crucially depend on the availability of peer reviewing data. Yet this data is hard to come by and is associated with a range of ethical, confidentiality and copyright issues, and while current methodological advances in peer review processing show great promise, a solid data foundation for the study of peer review is still lacking. This makes peer review an excellent target for proactive data collection. In this work we:

- outline the challenges and trade-offs associated with peer reviewing data collection;
- propose Yes-Yes-Yes (3Y) - a generic data collection workflow to address those challenges;
- report on the instantiation of 3Y at ACL Rolling Review (ARR)$^2$ and examine the selectivity, bias and donation behavior in our workflow;
- provide an open implementation of the proposed workflow for any research community that uses OpenReview$^3$.

We highlight that this work is about data collection methodology and not about the dataset, which is subject to a subsequent study focused on peer review.
Here, using peer review as an example domain, our goal is to spark the discussion on systematic approaches to proactive data collection from closed domains in general, by outlining the challenges, proposing the workflow, and discussing the practical implications of ethical data collection. From the perspective of meta-science and scientific policy research, our work addresses the need for evidence-based empirical study of peer review by proposing a workflow for ethical peer reviewing data collection which can be adapted to other communities. We elaborate on the underlying peer review system and publication culture at ARR in Appendix A to contextualize our findings within the broader science landscape.

# 2 Background

# 2.1 Text Sources in NLP

Early work in NLP focused on a few canonical text collections that were widely reused across studies. Yet, while core linguistic phenomena like POS are present in any domain, NLP models of these phenomena suffer from domain shift, and as we move towards application-oriented NLP, the target phenomena themselves become domain-dependent. The ability to acquire text beyond canonical collections is thus critical for the success of both core and applied NLP. One strong alternative to canonical data is found data that emerges as a side-product of text communication and is readily accessible, e.g. Wikipedia, scientific publications, books, etc. Yet, not every text type can be found, and many specialized and rare discourse types are under-represented in NLP. Moreover, not every found text can be used for research, and an active line of work in NLP is concerned with the ethical, legal and privacy-related aspects of data collection and reuse (Rogers et al., 2021; Bender and Friedman, 2018). While it is possible to avoid some of these challenges by generating text on demand, this approach is prone to artifacts and limited both in scale and in the kinds of texts that can be created this way.
Openly available texts only constitute a minor fraction of all texts produced. We claim that much of this data can be made available for research via proactive, donation-based data collection. Any restrictions imposed on the data naturally introduce bias. While recent work in NLP lays out the general principles for ethical data collection, the practical workflows that put it to use are missing, and it remains unclear how legal, ethical and other limitations shape the resulting data. Our work aims to bridge this gap by proposing a proactive data collection workflow and analysing its effects on data in the domain of peer reviews.

# 2.2 Peer Review

Scholarly peer review is a structured process that involves a range of stakeholders and produces many textual artifacts. A common reviewing campaign involves authors submitting their draft to a reviewing committee represented by editors. The editors distribute the drafts to reviewers who provide their evaluation in the form of a report. In an optional revision stage the authors might communicate with reviewers and update the draft, and editors might produce meta-reviews to help decision-making. Based on the evaluation, the work is accepted or rejected; accepted work is subject to official publication. Peer review is often anonymized: the reviewer identities are hidden from the authors (single-blind), and the author identities might be hidden from the reviewers (double-blind).

Reviewing quality and efficiency are of paramount importance to maintain the integrity of science. Yet issues persist in both dimensions: reviewers are prone to a range of biases and strategic behaviors (Tomkins et al., 2017; Lee et al., 2013; Stelmakh et al., 2020b, 2021), fall back on superficial heuristics (Rogers and Augenstein, 2020), and reviewing itself takes a lot of time and effort (GSPR, 2018). This motivates the computational study of peer review: Kang et al.
(2018) introduce PeerRead – a corpus composed of openly available reports and drafts – and report experiments on paper acceptance and aspect score prediction; Hua et al. (2019) investigate argumentation in peer review reports; Cheng et al. (2020) study the correspondences between review reports and author responses; Gao et al. (2019) investigate the effect of rebuttal on evaluation; Dycke et al. (2021) learn to rank papers based on review reports and scores; Yuan et al. (2022) explore fully-automatic review report generation based on submission drafts.

# 2.3 Status of Existing Peer Reviewing Data

Computational research in peer review critically depends on the availability of open peer reviewing data. Existing studies in NLP for peer reviews build almost exclusively on two data sources: the reviews from the International Conference on Learning Representations (ICLR) available via the OpenReview platform, and reviews for accepted papers at the Conference on Neural Information Processing Systems (NeurIPS) available via the conference website$^4$. Both ICLR and NeurIPS represent specialist communities in neural network and representation learning research – a narrow sample given the widespread use of peer reviewing across scientific fields. While peer review in NLP and computational linguistics conferences has been previously studied (Kang et al., 2018; Gao et al., 2019), publicly available data is scarce.

Peer review is a challenging case for data collection. At the time of writing, neither ICLR nor NeurIPS provide information on the authors' and reviewers' consent for processing their peer reviewing data, nor specify the conditions of data processing by third parties. Yet peer reviewing data is personal and confidential, and hence legally requires consent or other grounds for processing (see Section 3). Publishing peer reviewing data, and attaching a license and copyright to it, is non-trivial and requires careful consideration of authorship and attribution.
Yet none of the published datasets of peer reviews (incl. PeerRead (Kang et al., 2018), AMPERE (Hua et al., 2019), APE (Cheng et al., 2020) and ASAPReview (Yuan et al., 2022)) attach a clear license to the source or to the derivative annotated data, rendering the conditions of data re-use unspecified. All in all, the current ad-hoc approach to peer reviewing data collection in NLP shares a range of risks associated with the research use of found data in general, and the lack of standard data collection workflows results in a major overhead for individual data collection efforts. Our work aims to address these issues by proposing a general-purpose workflow for peer reviewing data collection built around a few key data collection principles that we outline next.

# 3 Problem Dimensions

We use peer reviewing data as an umbrella term for the artifacts produced during peer review, and limit our discussion here to submission drafts and review reports. We distinguish between metadata (numerical scores, track, paper format, etc.) and textual data, and focus our discussion on the latter. Textual data falls under the EU General Data Protection Regulation (GDPR) definition of personal data as "any information relating to an identified or identifiable natural person"$^5$ - the identities of the reviewers and authors are known to the editors, and remain potentially identifiable based on the writers' professional expertise and via author profiling. Although peer reviewers and authors rarely sign formal non-disclosure agreements, peer reviewing data is confidential. Finally, most peer reviewing data is anonymous, and only the editors know the identities of the participants.

A. Collection strategy. Personal data requires consent or other explicit grounds for processing. Two main approaches to obtaining consent are terms of service (ToS) and donation. ToS is a one-size-fits-all approach that requires the users to agree to the terms in order to use a platform.
Establishing universal ToS is a challenge that involves balancing the interests of many stakeholder groups, at the risk of losing participants who disagree with data collection. In a donation system, the decision to contribute data is made individually. Although technically more intricate, donation-based collection allows participants who do not wish to contribute to still use the platform. A donation-based approach might, however, introduce participation bias into the data (Keeble et al., 2015; Slonim et al., 2013).

B. Stakeholder involvement. Who has the authority to consent for what data? This question is not trivial: for example, in peer reviewing data, while a reviewer might agree to publishing their reports, the authors might object, not only due to potential negative reviews, but also due to the risk of leaking unpublished ideas and results pre-publication. Ideally one would want all involved stakeholders to consent; yet, increasing the number of involved parties can substantially reduce the amount of collected data and amplify the bias.

C. Licensing. Liberal data licensing allows the community to build upon prior data and ensures replicability. Creative Commons (CC) is a popular licensing choice for NLP datasets that supports additional restrictions on data sharing, adaptation and commercial use. Most CC licenses require attribution - specifying the title, authorship and source of the data. Yet as most peer reviewing data is anonymous, it cannot be directly attributed to its authors, and declaring the work public domain (CC0) leaves the data reuse entirely unregulated, incl. commercialization and claiming copyright and attaching a restrictive license to data derivatives.

D. Anonymity. During peer review, the identities of the authors are hidden to maintain the objectivity of review; anonymizing the reviewers aims to protect them from potential backlash.
Yet, peer review is hard work; and if previously hidden review texts are to be made public as part of a dataset, the authors and reviewers should have an opportunity to be credited for their work, which effectively deanonymizes their contributions.

E. Confidentiality. Modern academia is highly competitive. Review reports often summarize papers in a way that enables a third party to appropriate the idea or to gain advantage due to the knowledge of unpublished results. Professional ethics prevent idea theft via peer review, as the identities of the reviewers are still known to the editors. Yet, if peer reviews are made available to the open public, this is no longer true, and access to unpublished research results presents a confidentiality risk.

# 4 The Yes-Yes-Yes Workflow

# 4.1 Design

The aforementioned problem dimensions inform the design of our proposed workflow. We aim to grant key stakeholders – authors and reviewers – extensive control over their data, while maximizing the value of the resulting data to researchers. Both goals should be attained while ensuring minimal interference with the peer reviewing campaign and avoiding pressure on the stakeholders. We focus our data collection on drafts and review reports.

We collect data on a donation basis (Section 3.A) and analyze the resulting participation bias. The primary contributor of the data is always the stakeholder producing the artifact (B.); this means drafts must be donated by the authors, and the reports by reviewers. The stability and availability of the dataset is crucial for the replicability of research results (Zubiaga, 2018; Rogers et al., 2021), but simple consent does not guarantee data persistence, as it can be withdrawn, which would in turn require modification of the underlying data; it is preferable to perform a license transfer (C.): as long as the license conditions are met, the license cannot be revoked, and the research dataset remains stable.
Reviewers and authors must have an opportunity to explicitly request attribution (D.), with the identity anonymous by default. Finally, to account for confidentiality (E.), permission to make reports public must be obtained from the authors (also, B.), and only material associated with accepted publications should be publicly released. Reports for which only reviewers opt in can be subject to research but should not be made publicly accessible.

# 4.2 Workflow

Based on these design decisions, we define the Yes-Yes-Yes workflow (3Y-Workflow) for peer reviewing data collection at ARR (Figure 1) as a three-step decision process synchronized with the underlying peer reviewing campaign and applied on a per-paper and per-reviewer basis. The workflow yields three possible outcomes: no data collected (default), data added to a protected dataset (potentially available for research, but not public), and data added to a public dataset. In all cases, the resulting data is anonymous unless credit was explicitly requested by data contributors. In the following, we describe each step of the workflow in detail.

1Y: Yes by the Reviewers. First, each reviewer decides on contributing their reports in the given reviewing campaign. To minimize the communication overhead, reviewers decide whether to donate all of their review reports in bulk. To contribute, a reviewer signs a review report license agreement with optional attribution (see A.2). This means that reviewer names are not collected unless they explicitly request this. The donation can be made any time between submission and acceptance decisions. The reviewers are explicitly informed about the risks of future authorship attribution via profiling techniques (see A.3). If the reviewer does not explicitly give the "first Yes", their reports are discarded from the data collection pipeline. Donated reviews become part of the protected dataset and the workflow proceeds.

2Y: Yes by the Editors.
Next, we consider the acceptance decision on the submissions. If editors accept a draft for publication ("second Yes") and its reviewers agreed to donate their peer review reports in the previous step, the workflow continues. The resulting 2Y reviews refer to published papers and thereby do not leak unpublished results. If the paper is not accepted, the reviews remain part of the 1Y protected dataset.

3Y: Yes by the Authors. Finally, after the acceptance decisions are known, the authors of accepted papers are asked to contribute their drafts via a paper license agreement (see A.4), as well as to allow publication of the associated review reports from the reviewers that gave their "Yes" in the first step. By default, all reviewing data associated with a draft is opted out (no draft and no reviews included). If they so choose, the authors can either donate just the paper draft or both the draft and its associated reviews. If the authors donate their data ("third Yes"), the draft and its donated review reports become eligible for the public dataset, 3Y. Otherwise, previously donated review reports for accepted papers remain in the protected 2Y dataset.

Protected data in the 1Y and 2Y datasets is confidential by design: it comprises anonymous review reports and metadata of agreeing reviewers (1Y) of both accepted and rejected papers (2Y). Due to confidentiality, it may be used to calculate statistics or to quantify biases, but cannot be made public. In our analysis below, we solely rely on non-sensitive numerical statistics from the protected data, and in the current implementation of the 3Y-Workflow at ARR, protected data is not collected (see Section 7 for discussion).

# 5 Implementation at ARR

# 5.1 ACL Rolling Review

ACL Rolling Review (ARR) is an initiative in the ACL community that decouples peer review from publication and replaces the traditional, per-event reviewing campaigns with a single, journal-style reviewing process.
ARR was launched in May 2021 and serves as the main reviewing platform for multiple major ACL conferences, including the Annual Meeting of the ACL$^6$. ARR operates in monthly cycles: during each cycle, the authors might submit their work to ARR; the draft is evaluated by reviewers; based on the evaluation, action editors decide whether the draft has passed peer review. If the evaluation is positive, the draft can be committed to one of the ACL conferences, where program chairs (equivalent to editors) make the final decision to publish the work. If a draft is not accepted at a conference, it can be revised and resubmitted to ARR in subsequent cycles. For the 3Y-Workflow, only the publication decision by the program chairs is relevant.

ARR presents a unique opportunity for the study of peer review in the ACL community and beyond. The ever-increasing submission rates at ACL provide a steady source of reviewing data, and the unified reviewing workflow, protocols and forms minimize the effects of a particular reviewing campaign configuration on the process. Consequently, ARR is also highly suitable for the experimental study of proactive data collection in general, as we can vary collection configurations over time within a mostly fixed context of source data generation. In addition, the use of the open-source OpenReview platform makes it easy to automate many aspects of data collection, from sending out reminders to secure data filtering. With the kind permission and support of the editors-in-chief and the technical team, we have implemented the 3Y-Workflow at ARR.

# 5.2 Implementation Details

To minimize interference of the data collection with the peer reviewing campaign, our implementation relies on the built-in Task feature of OpenReview: optional data donation is seamlessly integrated as part of the reviewing process along with other tasks, like review submission.
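At its core, the implementation routes each donated item through the three decisions of Section 4.2. The sketch below is illustrative only: the class and function names are ours and not part of the ARR/OpenReview code base, and in the current ARR deployment the protected 1Y/2Y branches are not materialized; they are included here only to make the decision structure explicit.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer_consents: bool  # 1Y: reviewer signed the review license agreement
    text: str = ""

@dataclass
class Submission:
    accepted: bool           # 2Y: editors accepted the draft for publication
    authors_consent: bool    # 3Y: authors donated the draft and allowed review release
    reviews: list = field(default_factory=list)

def route_reviews(submission):
    """Sort each review of a submission into the 1Y/2Y protected or 3Y public set."""
    protected_1y, protected_2y, public_3y = [], [], []
    for review in submission.reviews:
        if not review.reviewer_consents:
            continue                        # no first Yes: never collected
        if not submission.accepted:
            protected_1y.append(review)     # 1Y only: protected, not public
        elif not submission.authors_consent:
            protected_2y.append(review)     # accepted, but authors opt out
        else:
            public_3y.append(review)        # all three Yes: public dataset
    return protected_1y, protected_2y, public_3y
```

The default outcome (no collection) is simply the absence of a review from all three sets, mirroring the opt-out-by-default design of the workflow.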
To enable future research on the collected data while preventing uncontrolled re-use and redistribution, we attach the Creative Commons BY-NC-SA 4.0 License to the data, which allows future users to share and adapt the data as long as it is attributed (BY), only used non-commercially (NC) and shared under the same licensing conditions (share-alike, SA)$^7$. To avoid the pitfall of reviewing data being non-attributable due to anonymity, we ask the contributors to perform a license transfer in which the copyright for the data is transferred to the ACL (similar to ACL Anthology$^8$ publications), while the data creators might still get attributed if they explicitly wish to reveal their identity.

To maintain the confidentiality of unpublished work, we currently opt to make public exclusively the peer reviewing data for papers that are later accepted at a venue and officially published. Our implementation of the 3Y-Workflow is openly available, making the data extraction code base transparent and allowing the workflow to be easily set up for any new OpenReview-based reviewing campaign, independent of the venue or research field.

# 6 Analysis

With the implementation of the proposed workflow at ARR, we can study the effects of donation-based proactive data collection on the resulting dataset composition, as well as the donation behavior across the users of the platform. For our analysis, we focus on the publications that were accepted at the 60th Annual Meeting of the ACL (ACL-2022) – the first major conference to employ ARR as its main reviewing platform. As over $98\%$ of the papers accepted at ACL-2022 were submitted to the September, October and November 2021 cycles of ARR, we consider those as the full dataset (ARR-all). We then focus the analysis on the metadata of the donated peer review reports from the subsets ARR-1Y (reviewers agree), -2Y (paper accepted) and -3Y (authors agree). For context, we bring in two prior datasets in the NLP domain.
The ACL-2018 dataset (Gao et al., 2019) was collected during the 56th Annual Meeting of the ACL; consent collection during the peer reviewing process and the lack of license transfer prevented the public release of the full dataset. The ACL-2017 portion of the PeerRead corpus (Kang et al., 2018) includes reviews of submissions to the 55th Annual Meeting of the ACL, for which both authors and reviewers agreed to share the review reports.

# 6.1 Selectivity of Decisions

Donation-based data collection is inevitably selective. We compare the collected data to limited statistics derived from the complete data (ARR-all). Aggregate metadata is generally not considered private and – despite the lack of explicit consent – it is widely used for conference reporting$^9$. Still, we received explicit one-time permission from the ARR editors-in-chief to obtain numerical statistics from ARR-all. These statistics were computed without access to identifiers of submissions, reviews or reviewers, posing no privacy risk during their creation.

As Table 1 shows, ARR-all already exceeds ACL-2018 in size, and although not all of this data is donated, the public data (ARR-3Y) already includes more reviews than the ACL-2017 subset of PeerRead. ARR-1Y covers approximately half of the reviews in ARR-all; each further decision in the workflow reduces the number of reviews (i.e. in ARR-2Y and ARR-3Y) to roughly a third, rendering the process highly selective. This confirms the anticipated limitation of proactive data collection based on multi-stakeholder decision making. Yet, as the workflow is continuously applied at further conferences and ARR cycles, the ARR-3Y subset of the data is likely to outgrow ACL-2018 and other existing datasets over time.

Dataset bias towards individual data creators has recently gained attention in the literature (Bandy and Vincent, 2021). Even without explicit authorship information, aggregate statistics allow us to study diversity in our peer reviewing data.
The total number of unique reviewers across cycles is not available for ARR-all, as each cycle is managed as a separate reviewing campaign: in other words, if an individual reviewed at ARR in September and October, they would be counted twice. Hence, Table 1 reports an upper bound $(\downarrow)$ on the number of reviewers and a lower bound on the number of reviews per reviewer $(\uparrow)$ for ARR-all. As our results in Table 1 show, the number of reviews per submission remains nearly constant throughout the decision steps, as reviews are sub-selected on a per-submission basis. However, the number of reviews per reviewer drops notably from ARR-1Y to ARR-3Y; while this limits analyses on a per-reviewer basis, it also shows that the reviews in ARR-3Y originate from many different creators.

|  | Our Collection |  |  |  | Previous Work |  |
| --- | --- | --- | --- | --- | --- | --- |
|  | ARR-all | ARR-1Y | ARR-2Y | ARR-3Y | ACL-2018 | ACL-2017 PR |
| # Submissions | 3591 | 2884 | 923 | 235 | 1528 | 137 |
| # Reviews | 11621 | 5656 | 1815 | 463 | 3875 | 275 |
| # Reviewers | 4421 $\downarrow$ | 1916 | 1073 | 388 | 1213 | - |
| # Reviews per submission | 3.24* | 1.96 ± 0.85 | 1.97 ± 0.87 | 1.97 ± 0.90 | 2.52 ± 0.67 | - |
| # Reviews per reviewer | 2.63 $\uparrow$ | 2.95 ± 1.66 | 1.69 ± 0.86 | 1.19 ± 0.43 | 3.04 ± 1.35 | - |

Table 1: Statistics of donated (ARR-{1, 2, 3}Y) and all (ARR-all) reviews for September, October, November 2021. Reviewer statistics for ACL-2018 from Dycke et al. (2021); statistics on the ACL-2017 portion of the PeerRead corpus from Kang et al. (2018). $\downarrow$ upper bound, $\uparrow$ lower bound, * estimated from total counts.

# 6.2 Polarity Bias

The decision to donate data is likely to correlate with a range of external factors. In peer review, one such potential factor is the review polarity – both from the reviewer and from the author side. Figure 2 compares the distribution of overall scores in reviews of ARR-{1, 2, 3}Y and ARR-all. As it shows, the donated reviews cover a wide range of ratings, with the prevalent overall score around 3 ("good"). The distribution of overall scores in ARR-1Y is nearly identical to ARR-all and resembles other computer science conferences (Ragone et al., 2013) and ACL-2018 (Gao et al., 2019). Thus, the reviewers' decision to donate data does not introduce substantial polarity bias. Yet, the acceptance decision correlates with high scores, and for the roughly $25\%$ of submissions that are accepted (2Y) we observe a polarity bias.
The final donation decision by authors of accepted papers (3Y) introduces no substantial further bias towards positive reviews; the review scores in ARR-3Y are only marginally more skewed towards positive scores than in ARR-2Y, with the most frequent score still at 3.5 ("good"/"strong"). Thereby, we can conclude that the conditioning on paper acceptance is the major source of polarity bias in the 3Y-Workflow.

Considering the interconnected nature of peer reviewing data, both reviewers and authors might base their donation decisions on all review scores of a submission rather than on isolated reviews. To account for this, we study the bias towards papers with homogeneous ratings. The agreement on overall scores by Krippendorff's $\alpha$ with the ordinal metric (Krippendorff, 1980) lies at 0.24 for ARR-1Y, considerably lower than 0.34 for ACL-2018 (Dycke et al., 2021); reviews for submissions with controversial ratings do seem to be donated, even when taking into account the different score scales of ACL-2018 and ARR. To investigate further, we consider the average standard deviation of overall scores per paper; if the overall scores per paper are homogeneous, this value is closer to zero. We observe only small differences between ARR-1Y (0.259), ARR-2Y (0.253) and ARR-3Y (0.257), showing that the distribution of negative and positive reviews to papers is similar across subsets.

# 6.3 Donation Behavior

A donation-based data collection workflow crucially depends on the stakeholders' participation, and we conclude our analysis with a brief overview of our observations related to donation behavior. Over the course of the considered three months, 2147 responses to the donation request were collected from 4138 active reviewers (each cycle of ARR treated individually). Among these responses, $6.33\%$ explicitly disagreed with data collection, while the rest agreed.
While the majority of the reviewers prefer to stay anonymous, a strong minority of $37.49\%$ requested attribution, showing the demand for getting credit for peer reviewing work.

![](images/49233390d5fd4c04eaedb1fa799bad6541c7be9b98a14de19a2ac94fef380872.jpg)
Figure 2: Overall score distribution in ARR-{1, 2, 3}Y and ARR-all reviews.

In the implementation of the 3Y-Workflow at ARR, reviewers are free to sign the agreement before, during or after writing their review reports. Interestingly, $43.9\%$ of donating reviewers contributed their data before submitting their first review report of a cycle, while $40.47\%$ agreed after their last review report. This justifies leaving the decision timing up to reviewers and suggests that the decision to donate is only weakly influenced by the outcome of the review.

Turning to the authors' participation, $29.53\%$ of the 999 accepted paper drafts were donated, and for $87.79\%$ of those the authors also agreed to the publication of the associated peer reviews. On the other hand, $37.34\%$ of authors explicitly disagreed with the collection. We note that despite the paper acceptance, author participation is nearly two times lower than the reviewers': while private feedback from some of the authors revealed concerns over unfair negative reviews being published, the overall response rate of roughly $67\%$ suggests that better community engagement in the collection would lead to higher participation and contribution rates.

# 7 Discussion

The 3Y-Workflow is an example of proactive data collection; unlike found data, it allows the collector to interact with the text authors and clears the data for research use.
The workflow lends itself to automation: apart from a few manual operations due to the current technical limitations of ARR, data collection can run continuously with minimal supervision, resulting in a steady supply of peer reviewing data from the NLP community, with the first data release currently in preparation. The open implementation allows adapting the workflow to other research communities at OpenReview.

Based on our experience, we determine four main directions for future studies in proactive data collection. Developing better approaches to stimulate participation is crucial for fast dataset growth: this includes ensuring high visibility of the data collection effort, high transparency of the process, and deeper integration of data collection into the reviewing process without interfering with the process itself. Our pilot investigation of bias in donation-based review collection revealed that some steps of the workflow indeed introduce polarity bias – yet it remains unclear whether it affects NLP applications and what other biases are present. The study of bias requires access to the full reviewing data – while we used anonymous numerical data from 1Y and 2Y as a proxy, deeper analysis would require access to review texts. While the 3Y-Workflow allows collection of protected data in theory (e.g. consented-to peer review texts blocked by the paper authors), ensuring fair access to protected data presents a technical, administrative and legal challenge: Who provides and maintains the infrastructure for safe storage of the protected data? What can be done with this data? Who can access the data and how is the access regulated? What are the consequences of protected data misuse and who would enforce them? A further community discussion is necessary to address those and other challenges and to make protected data available for research.
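The per-paper score homogeneity check of Section 6.2 is an example of such a numerical proxy: it requires only anonymous overall scores, never review texts. A minimal sketch, with invented scores for illustration (the function name is ours):

```python
import statistics

def mean_per_paper_std(scores_by_paper):
    """Average, over papers, of the population std of overall scores.

    Values near zero mean reviewers of the same paper tend to agree,
    i.e. the subset is biased towards homogeneously rated papers.
    """
    stds = [statistics.pstdev(s) for s in scores_by_paper if len(s) > 1]
    return sum(stds) / len(stds)

# Invented per-paper overall scores on a 1-5 scale:
subset = [[3.0, 3.5, 3.0], [2.5, 4.0, 3.0], [4.0, 4.0]]
homogeneity = mean_per_paper_std(subset)
```

Comparing this statistic across the 1Y/2Y/3Y subsets, as done in Section 6.2, quantifies whether donation decisions favor papers with uncontroversial ratings, without ever exposing protected review content.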
Finally, from the legal perspective, the replicability of research based on 3Y data is ensured via license transfer to a data controller – in our case, the ACL. This approach has many advantages over publishing unlicensed data, but it requires an external license holder and does not have a formal mechanism to ensure data withdrawal from derivative datasets without compromising replicability. The search for alternative legal and technical frameworks for publishing peer reviewing data constitutes another promising avenue for future studies.

Double-blind pre-publication peer review, as done at ARR, is an especially hard case for proactive data collection due to its confidentiality and anonymity requirements. Not all proposed measures will be relevant for every proactive data collection campaign, even in peer review – for example, other venues hosted by OpenReview make both accepted and rejected publications available, invalidating the confidentiality concern and rendering the 2Y step of our workflow redundant. Yet many of our findings in peer reviewing data collection at ARR point to potential gaps in proactive data collection methodology in general. Unlike canonical, found and generated data, proactive data collection demands a systematic study of contribution behavior by the content creators. As most of the data to be collected proactively is user-generated personal data, solid ethical and legal frameworks are required to support its research use. As some of the data can never be made public, technical solutions for secure access to protected data are urgently needed to enable the study of bias in new domains.

# 8 Conclusion

Given that most NLP tasks are either prone to domain shift, or are specific to a particular domain, or both, the future of NLP crucially depends on the field's ability to acquire new sources of textual data.
We have presented "Yes-yes-yes", the first ethically sound, consent-driven data collection process for peer reviewing data, and reported on its implementation at the ACL Rolling Review. We have further analysed the effects of ethical data collection strategies on dataset composition and found that different steps of our proposed data collection workflow indeed have quantifiable, systematic effects on the data. Yet many questions remain open, from strategies for better community engagement to the technicalities of replicability and archival of the data. We hope that our study sparks a systematic discussion on proactive data collection strategies as a viable alternative to canonical and found data in NLP, in the domain of peer reviewing, and beyond.

# Limitations

In this work we have addressed proactive data collection in the peer reviewing domain. While our focus on a particular domain, research community and reviewing workflow made our study feasible in the first place, it inevitably limits the generality of our findings. A systematic study of donation behavior and data collection workflows would allow us to evaluate our findings in the general case. Such a systematic study requires us to determine the parameters that can influence the outcome of data collection, and we outline some of them below.

In the peer reviewing domain, our study is limited in several important ways. First, we consider a specific, double-blind pre-publication workflow – yet alternative workflows exist, including single-blind review, open review and post-publication review, and we expect the parameters of the peer reviewing process to interfere with the willingness to donate data, with the stance towards anonymity, and with the requirements towards confidentiality. Second, we investigate a particular research community in natural language processing – yet peer review is used in most fields of science, which differ in terms of their reviewing and community culture.
Liberal licensing of the final publications at ACL facilitates data collection, as submission drafts published under an open license do not conflict with the final publication conditions. Yet fields with more restrictive publication and dissemination standards would need to find a compromise between producing open research data and the restrictive, potentially paid distribution of the final publications. ACL Rolling Review is a continuous reviewing process – meaning that the data size, and thereby the participation rate, are of secondary importance, as the necessary amount of data can be collected over time. Yet, when tackling one-time reviewing campaigns, additional work needs to be invested to maximize participation.

In the context of proactive data collection in general, our findings would need further validation in other application scenarios and communities. We now briefly review the main ideas and findings of our work and discuss the extent to which they apply generally. We believe the proposed problem dimensions (Section 3) apply to a wide range of scenarios outside the peer reviewing domain. We deem our core design principles (Section 4.1) applicable to a wide range of settings in peer review and beyond – yet it is imaginable that additional constraints emerge in new domains, e.g. when working with sensitive personal data. Our specific data collection workflow (Section 4.2) is tailored towards peer reviewing at ARR – we expect it to be applicable to most peer reviewing campaigns with small modifications, yet it might need substantial adaptation outside peer review, as the text production process and its requirements change. With respect to our main findings (Section 6), we expect high selectivity to hold in most non-open peer reviewing environments – yet our experimental setup does not allow us to distinguish between the explicit refusal to contribute and gaps in outreach.
Regarding bias, we find that the reviewers' decision to donate the data does not substantially affect the score distribution – which can be attributed to reviewers donating (or not donating) all their reviews in bulk, the strategy that we employed to simplify the data collection setup. We note that the availability of metadata for studying bias is a limiting assumption, as such metadata might not be available in other data collection scenarios. We believe the score bias introduced by filtering based on paper acceptance to be present in all datasets of peer reviews. The importance of this bias to NLP applications remains an open question. We believe demographics and the culture of a particular user community, as well as the timing and complexity of the contribution process, to be major factors in data donation, and see studies of data donation behavior as a viable future research direction.

# Ethics Statement

Our work directly contributes to the discussion of responsible data handling in NLP. While this paper does not introduce a dataset, the proposed workflow is designed to generate data that adheres to the current state of the art in handling personal data and liberal data licensing, and addresses the issues related to data confidentiality and anonymity. We believe that our discussion can be further refined, especially with respect to post-publication data withdrawal and protected data handling, and leave this for future studies. We believe the resulting data to be representative of its source population, namely the ACL community, to the degree that particular demographic characteristics systematically contribute to the willingness to donate the data. No annotators have been employed in the production of the dataset, and no demographic characteristics are collected or used in the study. For protected portions of the data, we only extract non-textual score statistics and do not use draft or review texts in any of the experiments.
While this work does not come with a new dataset, we take additional care in treating the anonymity and confidentiality of the contributed texts to minimize the possible harm from future dataset use. Given the availability of other datasets of peer reviews, we do not believe that our data source introduces new, potentially harmful applications of NLP in the peer reviewing domain. Instead, it promotes fair, consent-based data collection that should enable reproducible and ethically sound NLP for peer review processing in the future. With the field shifting towards responsible handling of data, we deem it crucial to realistically highlight the practical implications of ethical data handling on the underlying datasets, to guide future research in ethical data collection in NLP.

# Acknowledgements

We express our sincere gratitude to all parties providing support and advice during the realization of the peer review data collection at ACL Rolling Review. The data collection would not have been possible without the discussion and approval by the ACL Committee on Reviewing in 2021, chaired by Hinrich Schütze, and we are grateful to everyone who reached out to us during the pilot stages of the project to make suggestions and express their concerns – some of which we could address, others are left for future iterations of the 3Y-Workflow. We thank the editors-in-chief and the technical team of ACL Rolling Review for their support during this ongoing data collection effort, with special thanks to Amanda Stent and Sebastian Riedel. Finally, we thank Dorothy Deng for legal counseling and for the specification of the review report and paper draft license agreement texts.

This research work is part of the InterText initiative$^{10}$ at the UKP Lab.
It has been funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.

# References

Jack Bandy and Nicholas Vincent. 2021. Addressing "documentation debt" in machine learning research: A retrospective datasheet for BookCorpus. ArXiv, abs/2105.05241.
Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.
Liying Cheng, Lidong Bing, Qian Yu, Wei Lu, and Luo Si. 2020. APE: Argument pair extraction from peer review and rebuttal via multi-task learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7000-7011, Online. Association for Computational Linguistics.
Nils Dycke, Edwin Simpson, Ilia Kuznetsov, and Iryna Gurevych. 2021. Assisting decision making in scholarly peer review: A preference learning perspective. arXiv preprint arXiv:2109.01190.
Yang Gao, Steffen Eger, Ilia Kuznetsov, Iryna Gurevych, and Yusuke Miyao. 2019. Does my rebuttal matter? Insights from a major NLP conference. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1274-1290.
GSPR. 2018. Global State of Peer Review 2018. Wellington: Publons.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018.
Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana. Association for Computational Linguistics.
Xinyu Hua, Mitko Nikolov, Nikhil Badugu, and Lu Wang. 2019. Argument mining for understanding peer reviews. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2131-2137, Minneapolis, Minnesota. Association for Computational Linguistics.
Dongyeop Kang, Waleed Ammar, Bhavana Dalvi, Madeleine van Zuylen, Sebastian Kohlmeier, Eduard Hovy, and Roy Schwartz. 2018. A dataset of peer reviews (PeerRead): Collection, insights and NLP applications. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1647-1661.
Claire Keeble, Graham Richard Law, Stuart Barber, Paul D. Baxter, et al. 2015. Choosing a method to reduce selection bias: a tool for researchers. Open Journal of Epidemiology, 5(3):155-162.
K. Krippendorff. 1980. Content Analysis: An Introduction To Its Methodology. Sage Publications, Beverly Hills.

Carole J. Lee, Cassidy R. Sugimoto, Guo Zhang, and Blaise Cronin. 2013. Bias in peer review. Journal of the American Society for Information Science and Technology, 64(1):2-17.
Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356-5371, Online. Association for Computational Linguistics.
Denis Newman-Griffis, Jill Fain Lehman, Carolyn Rosé, and Harry Hochheiser. 2021.
Translational NLP: A new paradigm and general principles for natural language processing research. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4125-4138, Online. Association for Computational Linguistics.
D. B. Paul and J. M. Baker. 1992. The design for the Wall Street Journal-based CSR corpus. In Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing, pages 899-902, Banff.
Barbara Plank. 2016. What to do about non-standard (or non-canonical) language in NLP. arXiv preprint arXiv:1608.07836.
Azzurra Ragone, Katsiaryna Mirylenka, Fabio Casati, and Maurizio Marchese. 2013. On peer review in computer science: Analysis of its effectiveness and suggestions for improvement. Scientometrics, 97(2):317-356.
Philip Resnik, April Foreman, Michelle Kuchuk, Katherine Musacchio Schafer, and Beau Pinkham. 2021. Naturally occurring language as a source of evidence in suicide prevention. Suicide and Life-Threatening Behavior, 51(1):88-96.
Anna Rogers and Isabelle Augenstein. 2020. What can we do to improve peer review in NLP? In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1256-1262, Online. Association for Computational Linguistics.
Anna Rogers, Timothy Baldwin, and Kobi Leins. 2021. 'Just what do you think you're doing, Dave?' A checklist for responsible data use in NLP. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4821-4833, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Robert Slonim, Carmen Wang, Ellen Garbarino, and Danielle Merrett. 2013. Opting-in: Participation bias in economic experiments. Journal of Economic Behavior & Organization, 90:43-70.
Ivan Stelmakh, Charvi Rastogi, Nihar B. Shah, Aarti Singh, and Hal Daumé III. 2020a. A large scale randomized controlled trial on herding in peer-review discussions. arXiv preprint arXiv:2011.15083.

Ivan Stelmakh, Nihar B. Shah, and Aarti Singh. 2020b.
Catch me if I can: Detecting strategic behaviour in peer assessment. In ICML Workshop on Incentives in Machine Learning.
Ivan Stelmakh, Nihar B. Shah, Aarti Singh, and Hal Daumé III. 2021. Prior and prejudice: The novice reviewers' bias against resubmissions in conference peer review. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1):1-17.
Andrew Tomkins, Min Zhang, and William D. Heavlin. 2017. Reviewer bias in single- versus double-blind peer review. Proceedings of the National Academy of Sciences, 114(48):12708-12713.
Weizhe Yuan, Pengfei Liu, and Graham Neubig. 2022. Can we automate scientific reviewing? Journal of Artificial Intelligence Research, 75:171-212.
Arkaitz Zubiaga. 2018. A longitudinal assessment of the persistence of Twitter datasets. Journal of the Association for Information Science and Technology, 69(8):974-984.

# A Appendix

# A.1 ACL Publication Norms

To contextualize our findings and enable the application of our workflow in other research fields, we report here key information on the publication norms of the Association for Computational Linguistics (ACL) community and its conferences, in aggregate referred to as *ACL.

ACL Conferences ACL is a major, world-wide professional organisation in the field of Computational Linguistics and Natural Language Processing. Following a general trend in machine learning and NLP, the ACL community uses fast-paced conference-based publishing, with some conference publications attaining a similar level of prestige, visibility and impact as journal articles. ACL holds regular meetings worldwide, attracting thousands of submissions in cutting-edge NLP. *ACL conferences include but are not limited to the main ACL conference, regional chapters (AACL, EACL, NAACL) and the conference on Empirical Methods in Natural Language Processing (EMNLP). Since 1979, the main ACL conference has been held annually$^{11}$.
Peer Review at *ACL *ACL conferences are competitive, and the explosive growth in submission rates puts a strain on the peer reviewing processes at *ACL. As a dynamic, multi-disciplinary, fast-growing field, NLP faces many challenges related to peer reviewing, including bias, mixed review quality, and the high effort associated with reviewing. Systematic, evidence-based study of peer review requires data. While *ACL conferences produce large amounts of peer reviewing data, peer review at *ACL is double-blind, and in the past peer reviews have not been published systematically.

ACL Rolling Review Until ACL Rolling Review (ARR) was released as a pilot project in 2021, each *ACL conference employed an individual peer reviewing campaign. ARR introduced a unified journal-style reviewing workflow aiming to reduce the reviewing overhead in the community: submissions are reviewed and then accepted, rejected or resubmitted to the next ARR iteration. Once a paper passes peer review, it can be committed for publication and presentation at a conference of choice; the conference program chairs make the final decision on whether the manuscript is published at the selected venue. The editorial peer reviewing process for the monthly iterations at ARR is a four-step procedure:

1. The editors-in-chief desk-reject papers violating policies on formatting or anonymization.
2. Each paper is assigned to an action editor, who manages the reviewers for each of their assigned papers and makes the final recommendation.
3. Each paper is assigned three to four reviewers based on a matching score from their researcher profile, but without bidding$^{12}$.
4. After the reviewers have submitted their reports, the action editors make a decision for or against the revision of the paper.

Norms of Paper Writing at *ACL Paper submissions adhere to a standardized LaTeX template and typically consist of eight pages for long papers and four pages for short papers.
This ensures consistent formatting of the paper drafts. The language of the drafts is exclusively English, yet some conferences have in the past encouraged the additional submission of abstracts in other languages. Within the NLP community, various paper types are distinguished, which are subject to different styles of reviewing$^{13}$; some examples include resource papers, position papers or method papers.

Advice on review writing at *ACL:

- https://aclrollingreview.org/reviewertutorial
- https://acl2017.wordpress.com/2017/02/23/last-minute-reviewing-advice/
- https://2021.aclweb.org/blog/reviewing-advice/

# A.2 Review Report License Agreement

To add more detail on the license transfer, we report the license agreement for reviewers as used in the 3Y-Workflow at ARR (status May 2022). We underline passages that benefit from a more informal explanation and provide that discussion in a footnote for each passage. We add this commentary based on the interaction with ACL legal counseling, to make the given text more easily accessible to a broad audience without a legal background; however, we do not claim legal interpretative authority over these passages. Likewise, there are no legal warranties with respect to the re-use of this license agreement in other collection efforts.

Along with this license agreement, reviewers were presented with a disclaimer informing them about the purpose of the license transfer and the potential risk of author profiling being used to infer their identity based on text.
# Association for Computational Linguistics Peer Reviewer Content License Agreement

Name of ACL Conference: cycle name

Peer Reviewer's Name: reviewer identity

* Unless the peer reviewer elects to be attributed according to Section 2, the peer reviewer's name will not be identified in connection with publication of the Peer Review Content. If you wish to be attributed, please check this box $\square$ .

This Peer Reviewer Content License Agreement ("Agreement") is entered into between the Association for Computational Linguistics ("ACL") and the Peer Reviewer listed above in connection with content developed and contributed by Peer Reviewer during the peer review process (referred as "Peer Review Content").

In exchange of adequate consideration, ACL and the Peer Reviewer agree as follows:

1. Grant of License. Peer Reviewer grants ACL a worldwide, irreversible$^{14}$, and royalty-free license to use the Peer Review Content developed and prepared by Peer Reviewer in connection with the peer review process for the ACL Conference listed above, including but not limited to text, review form scores and metadata, charts, graphics, spreadsheets, and any other materials$^{15}$ according to the following terms:

(a) For Peer Review Content associated with papers accepted for publication, and subject to the Authors' permission, ACL may reproduce, publish, distribute, prepare derivative work, and otherwise make use of the Peer Review Content, and sublicense the Peer Review Content to the public according to terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License$^{16}$.

(b) For Peer Review Content associated with papers not accepted for publication, ACL may use the Peer Review Content for internal research$^{17}$, program analysis, and record-keeping purposes.
Notwithstanding the foregoing, the Parties acknowledge and agree that this Agreement does not transfer to ACL the ownership of any proprietary rights$^{18}$ pertaining to the Peer Review Content, and that Peer Reviewer retains respective ownership in and to the Peer Review Content.

# 2. Attribution and Public Access License.

(a) The Parties agree that for purpose of administering the public access license, ACL will be identified as the licensor of the Content with the following copyright notice: Copyright © 2022 administered by the Association for Computational Linguistics (ACL) on behalf of ACL content contributors: ... (list names of peer reviewers who wish to be attributed), and other contributors who wish to remain anonymous. Content displayed on this webpage is made available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

(b) In the event Peer Reviewer intends to modify the attribution displayed in connection with the copyright notice above, ACL will use reasonable efforts to modify$^{19}$ the copyright notice after receipt of Peer Reviewer's written request. Notwithstanding the foregoing, Peer Reviewer acknowledges and agrees that any modification in connection with attribution will not be retroactively applied.
(c) The Parties understand and acknowledge that the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License is irrevocable once granted unless the licensee breaches the license terms$^{20}$.

3. Warranty$^{21}$. Peer Reviewer represents and warrants that the Content is Peer Reviewer's original work and does not infringe on the proprietary rights of others. Peer Reviewer further warrants that he or she has obtained all necessary permissions from any persons or organizations whose materials are included in the Content, and that the Content includes appropriate citations that give credit to the original sources.

4. Legal Relationship.
The Parties agree that this Agreement is not intended to create any joint venture, partnership, or agency relationship of any kind; and both agree not to contract any obligations in the name of the other. + +Signature: signature, Date: date +Name Typed: name + +# A.3 Disclaimer for Reviewers + +To ensure that reviewers are aware of the risks associated with the donation of their (anonymous) reviewing data, the following disclaimer is presented along with the review report license agreement: + +Your participation is strictly voluntary. By transferring this license you grant ACL the right to distribute the text of your review. In particular, we may include your review text and scores in research datasets without revealing the OpenReview identifier that produced the review. Keep in mind that as with any text, your identity might be approximated using author profiling techniques. Only reviews for accepted papers will be eventually made publicly available. The authors of the papers will have to agree to the release of the textual review data associated with their papers. + +# A.4 Paper License Agreement + +Here, we report on the license agreement for authors as used in the implementation of the 3Y-Workflow at ARR (status May 2022). As in the previous subsection, we underline passages that benefit from a more informal explanation and discussion provided in a footnote for each passage. We focus on parts that deviate from the reviewers' license agreement. 
# Association for Computational Linguistics Blind Submission License Agreement

Name of ACL Conference: cycle name

Blind Submission Paper Title: title

List Authors' Names: author identifiers

* Authors' names will not be shared with the peer reviewers during the peer review process.

This Blind Submission License Agreement ("Agreement") is entered into between the Association for Computational Linguistics ("ACL") and the Authors listed in connection with Authors' blind submission paper listed above (referred as "Blind Submission Content"). In exchange of adequate consideration, ACL and the Authors agree as follows:

1. Grant of License. After the peer review process is concluded and upon acceptance of the paper, Authors grant ACL a worldwide, irrevocable, and royalty-free license to use the blind submission paper version$^{22}$ (referred as "Content"). The foregoing license grants ACL the right to reproduce, publish, distribute, prepare derivative work, and otherwise make use of the Content, and to sublicense the Content to the public according to terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Notwithstanding the foregoing, the Parties acknowledge and agree that this Agreement does not transfer to ACL the ownership of any proprietary rights pertaining to the Content, and that the Authors retain their respective ownership in and to the Content.
2. Permission to Publish Peer Reviewers Content. After the peer review process is concluded and upon acceptance of the paper, Authors have the option to grant ACL permission to publish peer reviewers' content associated with the Content, which may include text, review form scores and metadata, charts, graphics, spreadsheets, and any other materials developed by peer reviewers in connection with the peer review process.
- Authors grant permission for ACL to publish peer reviewers content
- Authors decline to grant permission for ACL to publish peer reviewers content

# 3. Attribution and Public Access License.

(a) The Parties agree that for purpose of administering the public access license, ACL will be identified as the licensor of the Content with the following copyright notice: Copyright © 2022 administered by the Association for Computational Linguistics (ACL) on behalf of the authors and content contributors. Content displayed on this webpage is made available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
(b) The Parties understand and acknowledge that the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License is irrevocable once granted unless the licensee breaches the license terms.

4. Effective Date. The grant of license pursuant to Section 1 and the permission to publish peer reviewers' content pursuant to Section 2 become effective in the event Authors' blind submission paper is accepted for publication by ACL. If the blind submission paper is not accepted, the Content and associated peer reviewers' content will remain confidential and kept for internal record-keeping purposes only.
5. Warranty. Authors represent and warrant that the Content is Authors' original work and does not infringe on the proprietary rights of others. Authors further warrant that they have obtained all necessary permissions from any persons or organizations whose materials are included in the Content, and that the Content includes appropriate citations that give credit to the original sources.
6. Legal Relationship. The Parties agree that this Agreement is not intended to create any joint venture, partnership, or agency relationship of any kind; and both agree not to contract any obligations in the name of the other.
By selecting 'On behalf of all authors, I agree' below, I confirm that all Authors have agreed to the above terms and that I am authorized to execute this Agreement on their behalf. Optionally, if you wish to transfer the license to the peer reviewing and blind submission data of all previous versions of this paper submitted to ARR, please select 'On behalf of all authors, I agree for all previous versions of this submission'.

- On behalf of all authors, I agree
- On behalf of all authors, I do not agree
- On behalf of all authors, I agree for this and all previous versions of this submission

Signature: signature, Date: date

Name (please print): author's name

# A.5 Review Form at ARR

As a reference, we report the review form used throughout the considered ARR cycles of September, October and November. The following fields, with the given descriptions, were presented to the reviewers.

Paper Summary Describe what this paper is about. This should help action editors and area chairs to understand the topic of the work and highlight any possible misunderstandings. Maximum length 20000 characters.

Summary Of Strengths What are the major reasons to publish this paper at a selective *ACL venue? These could include novel and useful methodology, insightful empirical results or theoretical analysis, clear organization of related literature, or any other reason why interested readers of *ACL papers may find the paper useful. Maximum length 20000 characters.

Summary Of Weaknesses What are the concerns that you have about the paper that would cause you to favor prioritizing other high-quality papers that are also under consideration for publication?
These could include concerns about correctness of the results or argumentation, limited perceived impact of the methods or findings (note that impact can be significant both in broad or in narrow sub-fields), lack of clarity in exposition, or any other reason why interested readers of *ACL papers may gain less from this paper than they would from other papers under consideration. Where possible, please number your concerns so authors may respond to them individually. Maximum length 20000 characters.

Comments, Suggestions And Typos If you have any comments to the authors about how they may improve their paper, other than addressing the concerns above, please list them here. Maximum length 20000 characters.

# Overall Assessment

- $5 =$ Top-Notch: This paper has great merit, and easily warrants acceptance in a top-tier *ACL venue.
- 4.5
- $4 =$ Strong: This paper is of significant interest (for broad or narrow sub-communities), and warrants acceptance in a top-tier *ACL venue if space allows.
- 3.5
- $3 =$ Good: This paper is of interest to the *ACL audience and could be published, but might not be appropriate for a top-tier publication venue. It would likely be a strong paper in a suitable workshop.
- 2.5
- $2 =$ Borderline: This paper has some merit, but also significant flaws. It does not warrant publication at top-tier venues, but might still be a good pick for workshops.
- 1.5
- $1 =$ Poor: This paper has significant flaws, and I would argue against publishing it at any *ACL venue.

# Confidence

- $5 =$ Positive that my evaluation is correct. I read the paper very carefully and am familiar with related work.
- $4 =$ Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings.
- $3 =$ Pretty sure, but there's a chance I missed something.
Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math or experimental design.
- $2 =$ Willing to defend my evaluation, but it is fairly likely that I missed some details, didn't understand some central points, or can't be sure about the novelty of the work.
- $1 =$ Not my area, or paper is very hard to understand. My evaluation is just an educated guess.

Best Paper Could this be a best paper in a top-tier *ACL venue?

- Yes
- Maybe
- No

Best Paper Justification If the answer on best paper potential is Yes or Maybe, please justify your decision.

Replicability Will members of the ACL community be able to reproduce or verify the results in this paper?

- $5 =$ They could easily reproduce the results.
- $4 =$ They could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method.
- $3 =$ They could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined, and/or the training/evaluation data are not widely available.
- $2 =$ They would be hard pressed to reproduce the results: The contribution depends on data that are simply not available outside the author's institution or consortium and/or not enough details are provided.
- $1 =$ They would not be able to reproduce the results here no matter how hard they tried.

Datasets If the authors state (in anonymous fashion) that datasets will be released, how valuable will they be to others?

- $5 =$ Enabling: The newly released datasets should affect other people's choice of research or development projects to undertake.
- $4 =$ Useful: I would recommend the new datasets to other researchers or developers for their ongoing work.
- $3 =$ Potentially useful: Someone might find the new datasets useful for their work.
+- $2 =$ Documentary: The new datasets will be useful to study or replicate the reported research, although for other purposes they may have limited interest or limited usability. (Still a positive rating) +- $1 =$ No usable datasets submitted. + +Software If the authors state (in anonymous fashion) that their software will be available, how valuable will it be to others? + +- $5 =$ Enabling: The newly released software should affect other people's choice of research or development projects to undertake. +- $4 =$ Useful: I would recommend the new software to other researchers or developers for their ongoing work. +- $3 =$ Potentially useful: Someone might find the new software useful for their work. +- $2 =$ Documentary: The new software will be useful to study or replicate the reported research, although for other purposes it may have limited interest or limited usability. (Still a positive rating) +- $1 =$ No usable software released. + +Author Identity Guess Do you know the author identity or have an educated guess? + +- $5 =$ From a violation of the anonymity-window or other double-blind-submission rules, I know/can guess at least one author's name. +- $4 =$ From an allowed pre-existing preprint or workshop paper, I know/can guess at least one author's name. +- $3 =$ From the contents of the submission itself, I know/can guess at least one author's name. +- $2 =$ From social media/a talk/other informal communication, I know/can guess at least one author's name. +- $1 = \mathrm{I}$ do not have even an educated guess about author identity. + +Ethical Concerns Independent of your judgement of the quality of the work, please review the ACL code of ethics (https://www.aclweb.org/portal/content/acl-code-ethics) and list any ethical concerns related to this paper. Maximum length 10000 characters. 
# You Are My Type! Type Embeddings for Pre-trained Language Models

Mohammed SAEED

EURECOM

mohammed.saeed@eurecom.fr

Paolo PAPOTTI

EURECOM

paolo.papotti@eurecom.fr

# Abstract

One reason for the positive impact of Pre-trained Language Models (PLMs) in NLP tasks is their ability to encode semantic types, such as 'European City' or 'Woman'. While previous work has analyzed such information in the context of interpretability, it is not clear how to use types to steer the PLM output. For example, in a cloze statement, it is desirable to steer the model to generate a token that satisfies a user-specified type, e.g., predict a date rather than a location.
In this work, we introduce Type Embeddings (TEs), an input embedding that promotes desired types in a PLM. Our proposal is to define a type by a small set of word examples. We empirically study the ability of TEs both in representing types and in steering masked-token predictions in BERT without changes to the prompt text. Finally, using the LAMA datasets, we show that TEs substantially improve the precision of extracting facts from PLMs.

# 1 Introduction

Pre-trained language models (PLMs) based on transformers (Vaswani et al., 2017) have achieved state-of-the-art results in several downstream NLP tasks (Devlin et al., 2019; Liu et al., 2020). Being trained in a self-supervised fashion, such models convey, to a certain extent, linguistic (Puccetti et al., 2021; Lin et al., 2019) and factual knowledge (Rogers et al., 2020; Meng et al., 2022). Faithfully extracting the desired knowledge is a crucial aspect that has sparked significant interest (Petroni et al., 2019; Bouraoui et al., 2020).

However, querying the PLM for information is not always reliable and requires more than a manually written prompt as input (Petroni et al., 2020). This is in contrast to a standard knowledge graph (KG), where users formulate a structured SPARQL query specifying exactly what to expect in the output. For example, the query "SELECT ?x WHERE wd:Q76 wdt:P26 ?x" returns the spouse of Barack Obama, "Michelle Obama". In the PLM setting, the SPARQL query could be replaced by a natural-language prompt, such as "The spouse of Barack Obama is [MASK]". While the predictions for the prompt are reasonable (left-hand side of Figure 1), they do not reflect the requirement of getting instances of a specific type (names of people) in the output.
In fact, for BERT's top-1 predictions on prompts where the desired output type is a MUSICAL INSTRUMENT (e.g., "Philip Glass plays [MASK]"), more than half of the predictions follow different types such as SPORT ("plays football") and CHARACTER ("plays Hamlet"), instead of the expected "plays piano". Indeed, differently from the KG with its typed entities, the type information is absent from the input prompt, so there is no guarantee about the type of the output.

While several works try to remedy this by engineering prompts to satisfy a desired type (Jiang et al., 2020; Shin et al., 2020; Zhong et al., 2021), or by relying on external sources to enrich the prompt (Petroni et al., 2020), these approaches do not fully exploit the latent concepts encoded in the PLM (Dalvi et al., 2022). To fill this gap, we introduce the notion of Type Embeddings (TEs). Similar to how positional embeddings in a PLM encode information about the position of a token in an input (Wang and Chen, 2020), TEs encode the expected type information of the output. The definition of a TE requires neither supervised training nor external resources, as it simply uses the existing PLM token embeddings, e.g., people names, to obtain type information, e.g., for PERSON. TEs can then be naturally injected into the input embedding layer of a PLM to convey the expected type of the output (right-hand side of Figure 1). Driving the model towards the expected type can help in applications exploiting PLMs, such as data integration (Cappuzzo et al., 2020), data cleaning (Narayan et al., 2022), rule induction (Cui and Chen, 2021), and fact-checking (Lee et al., 2020).

![](images/374b9a4b8b39dda809f9449d82f163eba7c64ee28446bb0931ca75e417781814.jpg)
Figure 1: Top-5 predictions of BERT (with log probabilities) for a given prompt (left) and the changes when adding type information (right). Tokens following the desired type are colored. The correct answer is underlined.

Our contributions can be summarized as follows:
- We introduce TYPE EMBEDDINGS (TEs), which, similar to positional embeddings, can be added to the input of PLMs and effectively encode type information. We show how to compute these embeddings using only labeled tokens that adhere to the specific type; the main idea is to remove the first singular vector of the token embedding matrix (Section 3).
- We propose methods to analyze type embeddings and evaluate their effectiveness by (i) measuring their semantic similarity to instances of the type, (ii) assessing the sensitivity of tokens to a given type, and (iii) analyzing layer-wise type classification (Section 4).
- We inject type embeddings into PLMs and show an increase in performance on a factual probing benchmark (LAMA), as well as the alleviation of "type bias" in a prompt by steering the output type with TEs (Section 5).

We conclude the paper by discussing future directions, including the extension of our approach from types to more generic concepts (Section 6). Data and code for the paper are available at https://github.com/MhmdSaid.id/TypeEmbedding.

# 2 Related Work

PLMs have been extensively studied in recent years, with most analyses focusing on the attention mechanism (Voita et al., 2019; Vig and Belinkov, 2019) and on the role of embeddings (Rogers et al., 2020; Li et al., 2021; Clark et al., 2019).

However, none of those efforts study the notion of types that we introduce. One exception is recent work on how concepts are encoded in PLMs. One work analyzes BERT by clustering contextual representations across layers, followed by a manual annotation to label clusters with meaningful concepts (Dalvi et al., 2022). Another work treats the feedforward network of a transformer as a key-value memory and studies how certain vectors encode concepts in the vocabulary space (Geva et al., 2022). Our effort is different in two ways.
First, we do not require labeling artifacts of the PLM; rather, we rely on user-specified tokens to model their common type. Second, we focus on type, which is a single semantic concept, leaving other concepts, such as syntactic, morphological, and lexical ones, to future work (Section 6).

Our approach is related to the interpretation of a neural net's internal state in terms of a concept defined through a vector (Kim et al., 2018; Schrouff et al., 2021). The Concept Activation Vector (CAV) is the normal to the hyperplane that separates, in the model's activations at a certain layer, examples with a target concept from examples without it. By Testing with a CAV (TCAV), one can identify, for example, the importance of the color 'red' in fire-engine images for a neural network. We use CAVs on textual input, rather than on images, to measure how sensitive the model is to a type after adding its TE (Section 4.3). However, while CAV is a sensitivity measurement tool, TEs steer the target type in the model's output. A work sharing the same spirit as ours uses a vector to steer the output of a PLM for style transfer between sentences (Subramani et al., 2022). However, our method requires only 10 tokens per type as opposed to 100 labeled sentences for style transfer, and it also works with GPT.

Our work introduces a new kind of type embedding to enrich the input to the PLM, in analogy to positional embeddings (Wang and Chen, 2020; Wang et al., 2021a). We show the benefit of our solution on the LAMA benchmark (Petroni et al., 2019), which contains cloze statements to query the PLM for a masked token.

![](images/b643194da70dd9732e1e31e56521f42697d1accdf94ed7855c71c39abcd78cac.jpg)
Figure 2: Input representation for a PLM. The YEAR type embedding (green box) is added to the [MASK] token.
To enhance a PLM's performance on this task, previous work improves prompts by mining or paraphrasing new prompts (Jiang et al., 2020), by adding trigger tokens (Shin et al., 2020), by finding vectors for prompts in the embedding space without restriction to the PLM's vocabulary (Zhong et al., 2021), or by combining multiple prompts (Qin and Eisner, 2021). As we simply add the type embedding to the input, our work is also different from approaches that pre-train an adapter to enhance PLMs' factual knowledge (Wang et al., 2021b) or that rely on information retrieval to provide additional context for the prompt (Petroni et al., 2020). Finally, we steer the output without changing the underlying model, in contrast to triggering the neurons responsible for a prediction (Dai et al., 2022) or producing an alternative model with edited facts (De Cao et al., 2021).

# 3 Type Embedding

In this section, we describe how to compute TEs from PLM token embeddings (Section 3.1) and how to use them (Section 3.2). Following the work on latent concepts in BERT (Dalvi et al., 2022), we focus on this model and report results on other PLMs in Table A2 in the Appendix.

# 3.1 Obtaining the TE

Given a type $t$ , let the matrix $P_{t} \in \mathbb{R}^{n \times d}$ hold the token embeddings for $n$ different tokens, where $d$ is the dimension of the token embeddings. The $n$ tokens are instances of the specific type $t$ . We call these tokens positively typed tokens.

For our analysis of $P_{t}$ , we apply Singular Value Decomposition (SVD). The SVD of an $m \times n$ matrix $M$ factorizes it into $M = U\Sigma V^{T}$ , where $U$ is an $m \times m$ unitary matrix, $\Sigma$ is an $m \times n$ diagonal matrix, and $V$ is an $n \times n$ unitary matrix. We call the column vectors of $U$ and $V$ singular vectors. The diagonal values in $\Sigma$ are called singular values.
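As a quick, illustrative NumPy check of these properties (the matrix here is random toy data, not real embeddings):

```python
import numpy as np

rng = np.random.default_rng(42)
M = rng.normal(size=(5, 3))                    # an m x n data matrix

# Thin SVD: M = U @ diag(S) @ Vt, with singular values sorted decreasingly.
U, S, Vt = np.linalg.svd(M, full_matrices=False)

assert np.allclose(M, U @ np.diag(S) @ Vt)     # exact factorization (up to float error)
assert np.all(S[:-1] >= S[1:])                 # singular values are in decreasing order
assert np.allclose(Vt @ Vt.T, np.eye(3))       # rows of Vt (columns of V) are orthonormal
```

NumPy's `svd` returns $V^{T}$ directly, so the first right singular vector is the first *row* of `Vt`.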
Assuming $M$ is a matrix whose rows contain the features of data points, the first singular vector of $V$ , corresponding to the largest singular value, gives the direction of maximum variance of the covariance matrix. In other words, it is the vector that captures the "common part" of all data points.

The SVD of our matrix is $P_{t} = U\Sigma V^{T}$ . The first column of $V$ , $v^{(1)}$ , is the first singular vector, which encodes information common to all $n$ tokens. We hypothesize that this vector, unlike the other singular vectors, contains non-type-related information and needs to be removed from the input to promote the type information encoded in the other singular vectors (more details in Section 4.1). A similar observation has been made for multilingual representations (Roy et al., 2020), where removing $r$ singular vectors leaves semantic-related information in the input representations (Yang et al., 2021). Thus, the embedding added to promote type $t$ is $E_{t} = -\lambda v^{(1)}$ , where $\lambda$ is a multiplier tuned on a hold-out dataset.

In practice, a type embedding is derived from a small set of tokens that are instances of the same type. These can be provided by users or obtained from existing typed resources such as KGs. In the rest of the paper, the TEs are computed based on weighted sampling from KG entities. We query DBpedia (Auer et al., 2007) for tokens adhering to a specific type, keep only those in the PLM's vocabulary, and use their node degree as the weight.

# 3.2 Using the TE

Assuming that a user has obtained the TE for the expected output type, the TE is simply added to the [MASK] input embedding, in analogy to token and positional embeddings. Figure 2 shows an example of a prediction where we enforce a YEAR type.

Depending on the task at hand, the TE can be added to one or more tokens.
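A minimal NumPy sketch of the TE computation of Section 3.1 and its injection at the [MASK] position (random matrices stand in for real BERT token embeddings; the mask index is hypothetical):

```python
import numpy as np

def type_embedding(P_t, lam=1.0):
    """E_t = -lam * v1, where v1 is the right singular vector of P_t
    (the n x d matrix of positively typed token embeddings) with the
    largest singular value, i.e., the hypothesized non-type "common
    part" shared by all n tokens."""
    _, _, Vt = np.linalg.svd(P_t, full_matrices=False)  # rows of Vt are right singular vectors
    return -lam * Vt[0]

# Toy stand-ins: 20 "token embeddings" of dimension 32 for one type.
rng = np.random.default_rng(0)
P_t = rng.normal(size=(20, 32))
E_t = type_embedding(P_t, lam=2.0)

# Injecting the TE: add it to the input embedding at the [MASK] position.
input_embeddings = rng.normal(size=(10, 32))   # one 10-token prompt
mask_position = 4                              # hypothetical [MASK] index
input_embeddings[mask_position] += E_t
```

Since $v^{(1)}$ has unit norm, $\|E_t\| = \lambda$ , so $\lambda$ directly controls how strongly the type is promoted.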
We found it more effective to add the TE only to the [MASK] token for MLM tasks, while for text generation it is more effective to add the TE to all tokens in the prompt. While we focus on MLM, we report preliminary results for text generation in Section 6.

![](images/0b34b25384e6116ac5f182f6c35eec7e551d42b65d605c420ccb82de816ece65.jpg)
Figure 3: Distribution of the mean of the singular vectors across different types. We report the singular vectors with the top-4 singular values. The distribution of $v^{(1)}$ has the highest kurtosis.

# 4 Analysis of TEs

Having obtained a TE, we propose a series of analysis methods to assess its validity. We use the TE as a simple type retriever (Section 4.1), study the distribution of singular vectors (Section 4.2), analyze the effect of the TE on the output and quantify the model's sensitivity to typed tokens (Section 4.3), perform layer-wise classification to identify the desired type (Section 4.4), and measure the TCAV of a model equipped with a TE (Section 4.5).

# 4.1 Similarity

As the TE is computed from token embeddings, the vector $E_t$ lives in the subspace formed by these embeddings. Therefore, we can use the TE to sort token embeddings by distance (through cosine similarity) as a qualitative confirmation that it reflects the desired type. Table 1 shows examples of TEs for three types (cities, years, and occupations) and the most similar token embeddings of BERT. This suggests that TEs could act as a standalone type retriever, to sort tokens according to type and to analyze any biases in the tokens from which the TE is computed. Applying the method to the first singular vector $v^{(1)}$ (i.e., $-E_t$ ), we observe that the top retrieved tokens ('', 'and', 'the', ...) relate to syntax, suggesting that the first singular vector encodes syntactic aspects, in agreement with work on multilingual representations (Roy et al., 2020) showing that such vectors encode non-semantic information (Yang et al., 2021).

# 4.2 Distribution of Singular Vectors

To understand the bias imposed by the first singular vector, we analyze the distributions of the singular vectors, as it has been shown that singular vectors whose distribution deviates from a Gaussian contain bias (Shin et al., 2018).

From Figure 3, we see that the distribution of the singular vector $v^{(1)}$ , corresponding to the largest singular value, clearly deviates from a Gaussian distribution, while the others do not. This is indicated by the high kurtosis values of the first singular vectors. This suggests that this singular vector could represent a common bias that affects tokens (Shin et al., 2018). Note that since each singular vector is of dimension $d$ , we report the mean of the singular vector in order to plot the histogram.

# 4.3 Effect of TE

We introduce two metrics for measuring a TE's effectiveness.

Adversarial Accuracy. We expect that adding a TE to BERT makes the PLM more "type aware" in the associated task, i.e., the TE conveys type-related tokens to the output. For example, in an MLM task, adding the TE should rank tokens following the associated type higher. In an NLG task, adding a TE should convey more type-related tokens in the generated text. We focus on the former and leave the latter for future work.

To validate this hypothesis, we check whether the score of a positively typed token in an MLM task for a model with the associated TE is greater than that for a standard BERT model.
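This check can be sketched as follows (the token scores here are made up for illustration; in practice they would be the two models' normalized MLM probabilities):

```python
def adversarial_accuracy(p_with_te, p_without_te, typed_tokens):
    """Fraction of positively typed tokens whose normalized MLM score
    under the TE-equipped model exceeds the score under the plain
    model (the "adversary")."""
    wins = sum(p_with_te[x] > p_without_te[x] for x in typed_tokens)
    return wins / len(typed_tokens)

# Hypothetical scores for three CITY tokens on the bare prompt "[MASK]".
p_te = {"Kazan": 0.12, "Baku": 0.10, "Cologne": 0.08}
p_plain = {"Kazan": 0.05, "Baku": 0.11, "Cologne": 0.03}
aa = adversarial_accuracy(p_te, p_plain, ["Kazan", "Baku", "Cologne"])  # 2 of 3 tokens improve
```

The same comparison, applied to token sensitivities instead of scores, yields the second metric below.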
Formally, given a model $\mathcal{M}_t$ with an MLM head that has been equipped with a TE $E_t$ promoting a specific type $t$ , we denote by $P_{\mathcal{M}_t}^{(x)}$ the normalized output score of token $x$ under model $\mathcal{M}_t$ and prompt $pr$ . To assess the effectiveness of the TE, we compare this normalized probability to that of an adversary, a BERT model without any equipped TE. We define the metric adversarial accuracy (AA) as:

$$
AA = \frac{\left| \left\{ x \in X_{t^{+}} \mid P_{\mathcal{M}_{t}}^{(x)} > P_{\mathcal{M}_{\emptyset}}^{(x)} \right\} \right|}{\left| X_{t^{+}} \right|} \tag{1}
$$

where $\mathcal{M}_{\emptyset}$ is a model without any TE, and $X_{t^{+}}$ is a set of tokens adhering to type $t$ . A higher value indicates that the TE is able to promote PLM tokens following type $t$ .

Adversarial Sensitivity. We also expect that adding the TE should make tokens following the type more sensitive to the input TE. In other words, adding the TE in an MLM setting should make these tokens more salient w.r.t. the input. To validate this hypothesis, we compare the sensitivity of a token w.r.t. the input in two models, with and without a TE. If the former is greater than the latter, then the model is more sensitive to the typed token.

More formally, given a model $\mathcal{M}$ , the output score of a token $x$ is $P^{(x)}(X_{[MASK]})$ . With a first-order Taylor series expansion, we obtain $S_{\mathcal{M}}^{(x)} = P^{(x)}(X_{[MASK]}) - P^{(x)}(\mathbf{0}) \approx \frac{\partial P^{(x)}(X_{[MASK]})^{T}}{\partial X_{[MASK]}} X_{[MASK]}$ , where $\mathbf{0}$ is the zero vector.

$S_{\mathcal{M}}^{(x)}$ is reminiscent of metrics used in the neural network pruning literature (LeCun et al., 1989; Molchanov et al., 2017). However, the metric is applied w.r.t. a vector rather than the usual scalar, and we do not take its absolute value, as we focus on comparing the sensitivities of models rather than measuring an absolute effect.

Finally, to test a TE, we compare the sensitivity to that of a standard BERT model. Similarly to $AA$ , we define adversarial sensitivity as the ratio of the number of positively typed tokens whose sensitivity increased after adding the TE to the number of positively typed tokens in a set $X_{t^{+}}$ . More formally:

$$
AS = \frac{\left| \left\{ x \in X_{t^{+}} \mid S_{\mathcal{M}_{t}}^{(x)} > S_{\mathcal{M}_{\emptyset}}^{(x)} \right\} \right|}{\left| X_{t^{+}} \right|} \tag{2}
$$

For both measures, we report results over a sample of 100 tokens, making sure that every one is an instance of type $t$ and that none of them has been used to derive the TE. We then compute the accuracy 10 times to obtain the mean and standard deviation. To make sure that any change in the scores is due only to the TE, we set $pr = [MASK]$ . This simple prompt neglects any contextual information that might affect PLM tokens, thus ensuring that any change is due to the TE.

| Type Emb. | Predictions |
| --- | --- |
| CITY | Kazan (.69), Baku (.67), Cologne (.67), Düsseldorf (.63), Toulouse (.62), Strasbourg (.62), Bonn (.61) |
| YEAR | 1823 (.85), 1834 (.83), 1819 (.82), 1755 (.82), 1825 (.82), 1835 (.82), 1805 (.82) |
| OCCUPATION | geologist (.76), biologist (.73), theologian (.72), screenwriter (.7), botanist (.69), linguist (.68), novelist (.67) |

Table 1: Most similar token embeddings to a given Type Embedding, with cosine similarity scores in parentheses. Tokens in italic were used to compute the TE.

Results for the mean and standard deviation are reported in Table 2 for both $AA$ and $AS$ . For $AA$ , TEs perform well in promoting tokens respecting a certain type. We observe a lower score for type CITY, which is likely due to (a) the large cardinality of the CITY type, making it more difficult to model all required aspects of cities, and (b) the coincidence of some city tokens with people names, such as Morris, Salem, and Riley.

For $AS$ , the TE has a small error margin. As we cannot expect token embeddings to capture all intricacies of a certain type, there are examples where the model fails the sensitivity test. Examples of failing tokens that did not show improvement in type sensitivity are Salvador and Blair for CITY, Cherokee and Romani for LANGUAGE, and general and vicar for OCCUPATION.

| Type | AA | AS |
| --- | --- | --- |
| CITY | .853 (.0166) | .82 (.014) |
| LANGUAGE | 1 (0) | .860 (.012) |
| OCCUPATION | 1 (0) | .893 (.018) |

Table 2: Mean and standard deviation (in parentheses) of $AA$ and $AS$ for different types $(k = 1)$ .

![](images/12cb1636d7b5600c5f5289ba95d3044f7a5c84254ca0e49db73520ead05119a6.jpg)

![](images/2cf899614eea350a4950d221129f1f3ee8bcba03d070d6d36eb930ade7fe9b8.jpg)

![](images/c930878e166b92795d420cb244e411027459c44cde816b4917d2ddb271f3048f.jpg)
Figure 4: F1 scores of three classifiers trained and tested on layer-wise embeddings of CITY (Ci), LANGUAGE (L), and ORGANIZATION (Org) datasets.

# 4.4 Layer-wise Classification

As TEs are added at the input of the model, we postulate that adding TEs should help BERT identify the types of input prompts more efficiently. To test this, we train a layer-wise linear classifier on embeddings of input prompts, where positive instances are prompts belonging to a certain type $t$ and negative instances are prompts of other types (examples in Table 3). For each type, we sample 100 positive and negative instances from the LAMA datasets (negative instances are sampled randomly from the remaining types) and train a layer-wise linear classifier. We repeat each experiment 10 times and report the mean accuracy on a test set of the same size. Prompts appearing in the train set do not appear again in the test set. Results in Figure 4 show that adding the TE gives most layer classifiers an increase in F1-score. The highest increase is usually at a middle layer, in agreement with other work (Dalvi et al., 2022), possibly because this is where a type is formed (Geva et al., 2021; Jawahar et al., 2019). The highest increase is for LANGUAGE, likely due to the smaller cardinality of the type compared to CITY and ORGANIZATION. From these classifiers we obtain the CAVs needed for TCAV in the following section.

# 4.5 TCAV Sensitivity

A Concept Activation Vector (CAV) is a vector in the direction of the values of a concept's set of examples (Kim et al., 2018). For example, given images showing the concept of the red color (positive samples) and images without it (negative samples), a linear classifier is trained on the activations at each layer to separate positive and negative samples. The normal to the hyperplane separating the samples is the CAV. By using CAVs (with directional derivatives), one can measure the sensitivity of an input w.r.t. a concept by gauging the sensitivity of the model predictions to changes in the input towards the direction of the concept. Thus, given a set of datapoints representing a certain concept, Testing with CAVs (TCAV) provides a means to compute the model's conceptual sensitivity across the input (Kim et al., 2018). As a final analysis measure, we posit that a model equipped with a TE should have higher TCAV values across layers. For this, we compute layer-wise TCAV using the CAVs from Section 4.4. Figure 5 shows the TCAV values for types CITY and LANGUAGE, comparing a vanilla BERT model $(k = 0)$ and one equipped with a TE $(k > 0)$ for the last 4 layers. As TCAV computes the model's conceptual sensitivity across a set of inputs, we observe that with the right TE, the importance of the type becomes more salient, i.e., the sensitivity of model predictions w.r.t.
types, such + +![](images/9f4e47bd65d29812f3634727a5cf28d3db52f50d64b26ec40c695fdd96d3b131.jpg) + +![](images/527448c4949b7d1532058b65f12b1c477795c97d23b52ea5aa9bf512de4840e7.jpg) + +![](images/cea9edee2a09987fce56dd48f549564560052dc3c1f744d37e05ef9fefd5aa26.jpg) + +![](images/b8e80e7bc591cd2877463d4f5e4926548ba3fd61f38cacbe88915d979b64f562.jpg) + +![](images/b6a280b0c586605b0d81ff8cb4e790fdf2546bdee490c27e4050f2e65908c315.jpg) + +![](images/7a8788abe8837af62d32979e2b00b8df22c019395c33da18c71d582f4058acab.jpg) + +![](images/9b3a29c8b8028e75871216df0097cc1cdaaf5cf768c8025a5451262004177442.jpg) +Figure 5: TCAV values for CITY (top) and LANGUAGE (bottom) datasets compared against the CITY (C), LANGUAGE (L), and ORGANIZATION (O) CAVs for layers 9-12 of BERT (left) and BERT+TE (right). + +![](images/eb7d537c46a1baf8c335e900ff1c5514bfcd890a8601d993fdae6ef7c485a5fe.jpg) + +as CITY at a certain layer, increases for a prompt and a TE associated with that type. + +# 5 Experiments + +The LAMA benchmark (Petroni et al., 2019) contains cloze statements to test PLMs' factual knowledge. First, we apply TEs to BERT and show increase in precision for most datasets (Section 5.1). We then enforce a change in the output with TEs (Section 5.2). Finally, we show the impact of the tokens that encode the TE (Section 5.3). + +# 5.1 LAMA + +We focus on the GRE and TREx datasets (ElSahar et al., 2018) as their prompts can be grouped into 17 output types from 38 datasets, with most examples covered by types CITY, LANGUAGE, and COUNTRY; examples for two types are in Table 3 (full list in Appendix A1). We remove prompts whose expected output is not in BERT's vocabulary and prompts containing more than one [MASK] token. This gives an upper bound on BERT's performance. + +As stated in Section 3.1, the type embedding is computed with weighted sampling from KG enti + +
    DatasetPrompt Example
    P27Albert II of Belgium is [MASK] citizen.
    P1376Cardiff is the capital of [MASK].
    P17Cairo American College is located in [MASK].
    P131Saharsa district is located in [MASK].
    P20Fredegund died in [MASK].
    P937Xavier Zubiri used to work in [MASK].
    + +Table 3: Examples of LAMA datasets grouped by output types COUNTRY (top) and CITY (bottom). + +
    P@1P@10P@50P@100
    B.223.509.740.845
    BTo.146.327.550.640
    PostTE.248.577.819.889
    BTE (our method).291.606.838.899
    + +ties (10, by default). To tune the $\lambda$ value of a TE, we use a hold-out dataset of $5\%$ for each dataset, and choose the value that maximizes precision. We report results on a BERT BASE CASED model. Further experiments with other PLMs show similar trends (results in Table A2 in Appendix). + +Intrinsic Evaluation. We compare BERT with TE (BTE) against standard BERT (B). As we assume that the user knows the desired output type, we also report for a baseline BERT + Token Type (BTo), which adds the expected type label (e.g., "the year") before the [MASK] token. We also report on a baseline PostTE which uses the TE at the output for re-ranking. The initial output score is added to the cosine similarity between the token embedding and the type embedding, controlled by a hyperparameter to adjust the importance of the similarity score. We choose the range of the hyperparameter to vary from 0 to 30 as in a similar work for natural language generation (Pascual et al., 2021). We also tested another baseline where we add the tokens used to derive the TE before the [MASK] token, as a signal of the desired types (Shin et al., 2020), but the results are lower than BTo. + +Aggregated (macro) precision@k $(\mathrm{P}@\mathrm{k})$ results over all datasets are reported in Table 4 (full results in Table A3 in Appendix). On average, our proposal clearly improves the results. We see improvements across most of the types using TEs. However, we do observe reduction of precision in a few types, where the main reason being the greedy selection of a non-optimal value of $\lambda$ . For type MANUFAC-TURER, setting $\lambda = 1$ (rather than $\lambda = 2$ ) improves + +Table 4: Mean precision over all LAMA datasets compared to intrinsic baselines. + +
|            | P@1  | P@10 | P@50 | P@100 |
|------------|------|------|------|-------|
| LPAQA      | .288 | .607 | .791 | .855  |
| BTE        | .317 | .650 | .868 | .920  |
| OptiPrompt | .469 | .790 | .922 | .956  |
| BTE        | .356 | .697 | .876 | .930  |
+ +Table 5: Mean precision over all LAMA datasets compared to extrinsic baselines. Unsupervised BTE outperforms LPAQA, which uses supervised learning. Supervised OptiPrompt obtains higher precision as it searches for prompts in the embedding space. + +the results. For type SPECIALIZATION, while desired outputs such as mathematics and physics do exist in the KG samples, other nodes in the KG, such as teenager, Greek, and Sir, have greater node degree and were therefore selected in the sample used to obtain the TE. For the GROUP data, the tuned value of $\lambda$ was 0, meaning that adding the TE would hurt performance. Analyzing the predictions, we attribute this to the bias the KG imposes on the TE: most samples are related to sport groups (such as FIFA, UEFA, and CONCACAF), producing a TE biased towards sports groups, which negatively impacts the predictions. We discuss other sampling methods in Section 5.3. Finally, the YEAR dataset shows lower performance, which we attribute to BERT's inability to precisely capture numeracy (Wallace et al., 2019). Compared with PostTE, our method of applying the TE at the input produces better results, as applying the TE at the output does not allow factual and type knowledge to be fused inside the model. PostTE does push typed tokens to higher rankings (also indicating the effectiveness of TEs in modeling type), but adding TEs to the input performs better. Adding TEs to the input is also more universal: the output is usually constrained by the experiment type (binary classification, MLM, NLG, ...), which does not always make it clear how to insert the TE, whereas the input is always available. Note that with PostTE, 22 of the 38 datasets used had an optimal value of $\lambda$ of zero, meaning that for most datasets it did not improve results; in contrast, our method had only 5/38 datasets with optimal $\lambda = 0$. + +Extrinsic Evaluation.
We evaluate our model against two supervised baselines. The first one, LPAQA (Jiang et al., 2020), uses mining-based methods to identify possible prompts for a given relation. The second baseline, OptiPrompt (Zhong et al., 2021), searches for real-valued input vectors that + +
| Prompt | TE | P@1 | P@10 | P@50 | P@100 |
|--------|----|-----|------|------|-------|
| DoB | - | 0 | 0 | 0 | 0 |
| DoB | $E_{city}$ | .153 | .404 | .613 | .701 |
| DoB | $E_{city}^{optim}$ | .194 | .444 | .614 | .719 |
| PoB | - | .244 | .533 | .728 | .808 |
+ +Table 6: Precision in predicting PoB (place of birth) for DoB (date of birth) prompts by adding the CITY TE $(k = 5)$. Results with TE are comparable to the PoB prompt. + +maximize the likelihood of the gold label on the training set using a gradient-based search algorithm. Results in Table 5 show that our approach does better with fewer prompts, as LPAQA requires at least $10\times$ more prompts per example. For OptiPrompt, the supervised approach produces better results than our unsupervised method; however, it requires training data, which is not always available. In fact, the authors use only TREx relations, as they can query the KG for more data, which is not the case for the Google-RE datasets. Also, as the method uses 1000 data points for training, the authors had to rely on another KG to gather more samples, whereas our approach requires only 10 tokens per type. Finally, while training enhances performance, it also encodes regularities that models can exploit, such as a tendency to over-predict the majority class label, as reported for OptiPrompt; our approach instead keeps the model parameters intact. + +# 5.2 Switching Types in Prompts + +The LAMA authors provide manually written prompts that adhere to the desired type. For example, to get the PLACE OF BIRTH (PoB) of a person, they use the prompt "[X] was born in [Y]", while for the DATE OF BIRTH (DoB) they use the prompt "[X] (born [Y])". These prompts follow how sentences about date and place of birth are written on Wikipedia pages. In this experiment, we examine whether TEs can enforce a different type given one of these two prompt structures. We use DoB prompts with the expected outcomes of PoB, where the goal is to steer the output to a different type. For example, given "Barack (born [MASK])" (a DoB prompt), we set the expected output to "Honolulu" (the PoB answer).
We remove examples for which the expected output is not in BERT's vocabulary, leaving 1139 prompts. We then add the TE for CITY during inference. The results are shown in Table 6. As expected, without any TE, the precision score is zero, as the output + +
|       | P@1  | P@10 | P@50 | P@100 |
|-------|------|------|------|-------|
| BTE   | .291 | .606 | .838 | .899  |
| Top10 | .336 | .660 | .856 | .907  |
| Bot10 | .235 | .534 | .764 | .846  |
| Unif  | .250 | .563 | .798 | .884  |
+ +Table 7: Mean precision over all datasets for each sampling method. + +type is heavily influenced by the prompt. Adding $E_{city}$ to the input steers the model to change type, and it outputs cities. However, the scores are still lower than those of the PoB prompts. Since the prompt is biased towards a certain type, better results can be obtained by removing the projection of the year information from the city TE. Our optimized TE is then $E_{city}^{optim} = E_{city} - \frac{E_{city} \cdot E_{year}}{\|E_{city}\|_2 \|E_{year}\|_2} E_{year}$, which indeed shows improved results in Table 6. + +# 5.3 Token Sampling + +We study the impact of how tokens for TEs are sampled by (i) changing the sampling method, and (ii) varying the number of tokens used. + +Sampling Methods. We compare our default, weighted sampling with node degrees as weights (BTE), against three alternatives: (i) using the Top-10 tokens w.r.t. node degree (Top10), (ii) using the Bottom-10 tokens (Bot10), and (iii) sampling uniformly without relying on node degree (Unif). We repeat the experiment in Section 5.1 with every sampling strategy and show results in Table 7; more detailed results are in Table A4 in the Appendix. + +We observe that Top10 and weighted sampling obtain comparable performance. While Top10 gets better results for COUNTRY, ORGANIZATION, and GENRE, other types such as YEAR, SPECIALIZATION, and MANUFACTURER show lower precision because of the bias coming from the most popular KG samples. For example, Top10 samples only years in the $21^{st}$ century, specializations related to titles (duke, Sultan, and Sir rather than mathematics and physics), and is biased towards car manufacturers (Fiat and Honda). Weighted sampling reduces such bias. For FOOTBALL POSITION, Unif does better as its sample has more variety, with more tokens related to American football positions (quarterback and guard) rather than only soccer positions (goalkeeper and midfielder).
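The pieces introduced so far, a TE as the mean embedding of tokens sampled from the KG with node degrees as weights, and the projection-removal step behind $E_{city}^{optim}$, can be sketched as follows (all names and toy values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tokens(tokens, degrees, n=10):
    """Weighted sampling of n distinct tokens, with KG node degrees as weights."""
    p = np.asarray(degrees, dtype=float)
    p = p / p.sum()
    idx = rng.choice(len(tokens), size=n, replace=False, p=p)
    return [tokens[i] for i in idx]

def type_embedding(token_embs):
    """A TE as the mean of the sampled tokens' input embeddings."""
    return np.mean(token_embs, axis=0)

def remove_projection(e_type, e_other):
    """Optimized TE: subtract the cosine-scaled component of another TE,
    mirroring E_city^optim = E_city - cos(E_city, E_year) * E_year."""
    cos = e_type @ e_other / (np.linalg.norm(e_type) * np.linalg.norm(e_other))
    return e_type - cos * e_other

# toy example: 4 "city" tokens with node degrees taken from a KG
tokens = ["paris", "london", "rome", "oslo"]
degrees = [40, 30, 20, 10]
sampled = sample_tokens(tokens, degrees, n=2)
te_city = type_embedding(np.array([[1.0, 0.2], [0.9, 0.1]]))
te_optim = remove_projection(te_city, np.array([0.0, 1.0]))
```

Weighted sampling keeps popular KG nodes likely but not guaranteed, which is the bias/variety trade-off discussed above for Top10 versus BTE.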
+ +In some cases, the bias in the KG reflects the bias in the test data. For OCCUPATION, the TE using Top10 does encode some bias as most tokens are related to artistic positions (musician, actor), but + +
+ +this improves results, as the same bias also occurs among the expected outputs. + +Varying Size of Samples. To study the effect of the number of tokens used in deriving the TE, we repeat the experiment in Section 5.1 while varying the number of tokens $n$. Results are reported in Table 8. We observe that results peak between 10 and 20 samples, but even a small number of samples significantly improves the results compared to the original BERT without TE ($n = 0$). + +

| $n$ | P@1  | P@10 | P@50 | P@100 |
|-----|------|------|------|-------|
| 0   | .223 | .509 | .740 | .845  |
| 5   | .279 | .611 | .814 | .873  |
| 10  | .291 | .606 | .838 | .899  |
| 15  | .275 | .617 | .847 | .894  |
| 20  | .298 | .644 | .859 | .905  |
| 50  | .292 | .631 | .853 | .906  |

Table 8: Average precision over the datasets while varying the number of samples $n$ used to compute the TE. + +# 6 Conclusion and Future Work + +We have introduced TEs as additional input for PLMs to better encode type information, proposed methods to analyze TEs, and tested them on the LAMA dataset. While initial results are promising, we identify two directions for future research. + +More Precise Type Embeddings. Further analysis of the examples can lead to better TEs. One direction is to also use negative samples to compute the TE; this implies learning a vector that separates positive from negative samples, as CAVs do. However, adding negative samples can introduce more bias into the TE, which could be alleviated by performing statistical hypothesis testing, as with CAVs (Kim et al., 2018). Another way to improve our proposal is to combine vectors: assuming a taxonomy of the types, different TEs can be combined, for example by subtracting from the type at hand, say PERSON, the TEs of types that are neither super- nor subtypes, such as CITY and YEAR, as discussed for DoB in Table 6. + +From Types to Concepts. While we focus on types and TEs, our approach can be extended to more generic concepts, as long as their tokens are in the PLM's vocabulary. This could help alleviate the stereotypical and toxic content found in PLMs (Ousidhoum et al., 2021). To test this idea, we report an example for the task of natural language generation, where we "de-toxify" text generated by an autoregressive language model. We use
+ +
| Prompt | $\lambda$ | Toxicity pr. (↓) | Output ppl. (↓) | Di-1 (↑) | Di-2 (↑) | Di-3 (↑) |
|--------|-----------|------------------|-----------------|----------|----------|----------|
| Toxic     | 0  | .687 | 4.727  | .541 | .455 | .357 |
|           | -1 | .389 | 6.340  | .602 | .476 | .377 |
|           | -2 | .356 | 17.564 | .668 | .509 | .400 |
| Non-toxic | 0  | .045 | 4.195  | .801 | .676 | .528 |
|           | -1 | .077 | 4.038  | .782 | .622 | .484 |
|           | -2 | .088 | 3.716  | .840 | .620 | .475 |
+ +Table 9: Results of detoxifying texts generated from a distilled GPT-2 model. $\lambda$ is the multiplier of the TE ($\lambda = 0$ for the original PLM). + +a distilled GPT-2 model (Radford et al., 2019) and the RealToxicityPrompts dataset, which contains 100K sentence-level prompts derived from a corpus of English text (Gehman et al., 2020). We feed 10K samples to the model to produce generated texts, and then measure their toxicity with the Perspective API. We consider a text toxic if the toxicity probability returned by the API is $>0.5$, and obtain 460 toxic prompts. We then compute a "toxicity concept embedding" using 6 manually picked tokens that convey toxicity. To de-toxify the generated text, we set the multiplier $\lambda$ to negative values. Instead of adding the embedding to the [MASK] token only, we found better results when adding it to all tokens in the prompt; we believe this helps to preserve the concept information along the lengthy generation procedure, as opposed to MLM, which decodes a single token. We also test a sample of non-toxic prompts (of the same size as the toxic prompts) to show the effect of our embedding. In addition to toxicity, we measure fluency (the perplexity of the generated continuations according to a larger PLM) and diversity (the mean number of distinct uni-/bi-/trigrams, normalized by the length of the text for each prompt), as in other work on text generation (Liu et al., 2021). + +Results in Table 9 show a large reduction in the toxicity probability with $\lambda = -1$, with higher diversity but slightly lower fluency for the toxic prompts. Setting $\lambda = -2$ further decreases the toxicity probability, but at the expense of fluency. For the non-toxic prompts, the toxicity results are nearly unchanged, with minor differences in fluency and diversity.
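The steering mechanism, adding the scaled concept embedding to every token embedding of the prompt, can be sketched with plain arrays; in a real pipeline the modified embeddings would be fed to the PLM via an `inputs_embeds`-style interface, and all names here are illustrative:

```python
import numpy as np

def steer_inputs(input_embs, concept_emb, lam):
    """Add lam * concept embedding to every token embedding in the prompt.

    With lam < 0 this pushes generation away from the concept (e.g. toxicity);
    lam = 0 recovers the original model's inputs.
    input_embs:  (seq_len, d) token embeddings of the prompt
    concept_emb: (d,)         the concept/type embedding
    """
    return input_embs + lam * concept_emb  # broadcast over positions

# toy example: a 3-token prompt with 4-d embeddings
prompt_embs = np.zeros((3, 4))
concept = np.array([1.0, 0.0, -1.0, 0.0])   # hypothetical "toxicity" direction
steered = steer_inputs(prompt_embs, concept, lam=-1.0)
```

Applying the shift at every position, rather than at a single masked position, is what the paragraph above reports as preserving the concept signal over long generations.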
Considering that a "concept vector" steers the generation of the PLM without any form of fine-tuning, it is promising to study the use of such "plug-and-play" concept vectors. Examples are reported in Table B1 in the Appendix. + +# Limitations + +Encoding types requires a set of tokens and their embeddings. As we rely on PLMs, we are restricted to the tokens in their vocabularies, which limits the number of possible types for TEs. In addition, while we use TEs for a factual dataset, a TE encodes only type information and no factual information; although results on LAMA improve with TEs, the interaction between type information and the factual knowledge of the PLM is not yet understood. Finally, there is no single best sampling method for computing TEs (assuming the existence of a knowledge source such as a KG): the best sampling is heavily dependent on the distribution of the gold labels in the test dataset. + +# Ethics and Broader Impact + +We are aware of $(i)$ the biases and abusive language patterns (Bender et al., 2021) that PLMs impose, and $(ii)$ the imperfection and bias of using knowledge graphs. However, our goal in this paper is to study how PLMs can be made more 'type-aware'. For $(i)$, there has been some work on debiasing PLMs (Liang et al., 2020); for $(ii)$, we use a KG in our work to have variety in the set of tokens, but one could resort to user-specified tokens validated by consensus to reduce the bias. + +# Acknowledgment + +This work has been partially supported by the ANR project ATTENTION (ANR-21-CE23-0037) and by gifts from Google. + +# References + +Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: A nucleus for a web of open data. The Semantic Web, pages 722-735.
+Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In FAccT, pages 610-623. ACM.
+Zied Bouraoui, José Camacho-Collados, and Steven Schockaert. 2020. Inducing relational knowledge from BERT. In AAAI.
+Riccardo Cappuzzo, Paolo Papotti, and Saravanan Thirumuruganathan. 2020. Creating embeddings of heterogeneous relational datasets for data integration tasks. In Proceedings of the 2020 International Conference on Management of Data, SIGMOD Conference 2020, online conference [Portland, OR, USA], June 14-19, 2020, pages 1335-1349. ACM.
+Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? An analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. Association for Computational Linguistics.
+Wanyun Cui and Xingran Chen. 2021. Open rule induction. In Advances in Neural Information Processing Systems.
+Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Knowledge neurons in pretrained transformers. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, pages 8493-8502, Dublin, Ireland. Association for Computational Linguistics.
+Fahim Dalvi, Abdul Rafae Khan, Firoj Alam, Nadir Durrani, Jia Xu, and Hassan Sajjad. 2022. Discovering latent concepts learned in BERT. In International Conference on Learning Representations.
+Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6491-6506, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Hady ElSahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon S. Hare, Frédérique Laforest, and Elena Simperl. 2018. T-rex: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018. +Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 3356-3369, Online. Association for Computational Linguistics. +Mor Geva, Avi Caciularu, Ke Wang, and Yoav Goldberg. 2022. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. *ArXiv*, abs/2203.14680. + +Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are key-value memories. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5484-5495, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Ganesh Jawahar, Benoit Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics. +Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438. +Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, Fernanda B. Viégas, and Rory Sayres. 2018. 
Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In ICML. +Yann LeCun, John Denker, and Sara Solla. 1989. Optimal brain damage. In Advances in Neural Information Processing Systems, volume 2. Morgan-Kaufmann. +Nayeon Lee, Belinda Li, Sinong Wang, Wen-tau Yih, Hao Ma, and Madian Khabsa. 2020. Language models as fact checkers? In Proceedings of the Third Workshop on Fact Extraction and VERIFICATION (FEVER), pages 36-41, Online. Association for Computational Linguistics. +Bai Li, Zining Zhu, Guillaume Thomas, Yang Xu, and Frank Rudzicz. 2021. How is BERT surprised? layerwise detection of linguistic anomalies. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4215-4228, Online. Association for Computational Linguistics. +Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis Philippe Morency. 2020. Towards Debiasing Sentence Representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5502-5515, Online. Association for Computational Linguistics. +Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241-253, Florence, Italy. Association for Computational Linguistics. +Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021. DExperts: Decoding-time controlled text generation with experts and anti-experts. + +In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6691-6706, Online. Association for Computational Linguistics. 
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. RoBERTa: A Robustly Optimized BERT Pretraining Approach.
+Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual knowledge in GPT. arXiv preprint arXiv:2202.05262.
+Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. 2017. Pruning convolutional neural networks for resource efficient inference. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Avanika Narayan, Ines Chami, Laurel Orr, and Christopher Ré. 2022. Can foundation models wrangle your data?
+Nedjma Ousidhoum, Xinran Zhao, Tianqing Fang, Yangqiu Song, and Dit-Yan Yeung. 2021. Probing toxic content in large pre-trained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4262-4274, Online. Association for Computational Linguistics.
+Damian Pascual, Beni Egressy, Clara Meister, Ryan Cotterell, and Roger Wattenhofer. 2021. A plug-and-play method for controlled text generation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3973-3997, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2020. How context affects language models' factual predictions. AKBC.
+Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language Models as Knowledge Bases?
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics.
+Giovanni Puccetti, Alessio Miaschi, and Felice Dell'Orletta. 2021. How do BERT embeddings organize linguistic knowledge? In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 48-57, Online. Association for Computational Linguistics.
+Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts. In NAACL.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A Primer in BERTology: What We Know About How BERT Works. Transactions of the Association for Computational Linguistics, 8:842-866.
+Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips, and Yinfei Yang. 2020. LAReQA: Language-agnostic answer retrieval from a multilingual pool. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5919-5930, Online. Association for Computational Linguistics.
+Jessica Schrouff, Sebastien Baur, Shaobo Hou, Diana Mincu, Eric Loreaux, Ralph Blanes, James Wexler, Alan Karthikesalingam, and Been Kim. 2021. Best of both worlds: local and global explanations with human-understandable concepts. ArXiv, abs/2106.08641.
+Jamin Shin, Andrea Madotto, and Pascale Fung. 2018. Interpreting word embeddings with eigenvector analysis. In NeurIPS Workshop on Interpretability and Robustness in Audio, Speech, and Language (IRASL).
+Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020.
AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222-4235, Online. Association for Computational Linguistics.
+Nishant Subramani, Nivedita Suresh, and Matthew Peters. 2022. Extracting latent steering vectors from pretrained language models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 566-581, Dublin, Ireland. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
+Jesse Vig and Yonatan Belinkov. 2019. Analyzing the structure of attention in a transformer language model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 63-76, Florence, Italy. Association for Computational Linguistics.
+Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797-5808, Florence, Italy. Association for Computational Linguistics.
+Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP models know numbers? Probing numeracy in embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5307-5315, Hong Kong, China. Association for Computational Linguistics.
+Benyou Wang, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, and Jakob Grue Simonsen. 2021a. On position embeddings in BERT.
In International Conference on Learning Representations. +Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2021b. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1405-1418, Online. Association for Computational Linguistics. +Yu-An Wang and Yun-Nung Chen. 2020. What Do Position Embeddings Learn? An Empirical Study of Pre-Trained Language Model Positional Encoding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6840-6849, Online. Association for Computational Linguistics. +Ziyi Yang, Yinfei Yang, Daniel Cer, and Eric Darve. 2021. A simple and effective method to eliminate the self language bias in multilingual representations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5825-5832, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021. Factual probing is [mask]: Learning vs. learning to recall. In *North American Association for Computational Linguistics (NAACL)*. + +# A LAMA + +Dataset statistics are reported in Table A1. Detailed results on the datasets are reported in Table A3. A full inference run on all LAMA datasets takes on average approximately 5 minutes on Google Colab with a Tesla P100 with a batch size of 32. We vary $\lambda$ from 0 to 5. We repeat the experiment in Section 5.1 with every sampling strategy and report results in Table A4. For LANGUAGE, all sampling + +
| Type | Total Size | Dataset | Size | Sample |
|------|-----------|---------|------|--------|
| Country (Co) | 3796 | P495 | 896 | The Sharon Cuneta Show was created in [MASK]. |
| | | P27 | 948 | Albert II of Belgium is [MASK] citizen. |
| | | P1376 | 196 | Cardiff is the capital of [MASK]. |
| | | P1001 | 665 | National Congress of Honduras is a legal term in [MASK]. |
| | | P530 | 174 | Vanuatu maintains diplomatic relations with [MASK]. |
| | | P17 | 917 | Cairo American College is located in [MASK]. |
| Football Position (FP) | 737 | P413 | 737 | Curt Flood plays in [MASK] position. |
| Manufacturer (Ma) | 878 | P176 | 878 | iPod shuffle is produced by [MASK]. |
| Organization (Org) | 837 | P108 | 342 | David Dimbleby works for [MASK]. |
| | | P178 | 495 | iPod Classic is developed by [MASK]. |
| Occupation (Occ) | 915 | P106 | 915 | Murray Grand is a [MASK] by profession. |
| Year GRE (Y (GRE)) | 1821 | date_of_birth | 1821 | Emily Ballou (born [MASK]). |
| Genre (Ge) | 849 | P136 | 849 | Boyd Raeburn plays [MASK] music. |
| Group (Gr) | 212 | P463 | 212 | Russian Football Union is a member of [MASK]. |
| Language (L) | 4118 | P407 | 756 | The Pirate Bay was written in [MASK]. |
| | | P103 | 954 | The native language of Jan Davidsz. de Heem is [MASK]. |
| | | P1412 | 921 | Leone Caetani used to communicate in [MASK]. |
| | | P37 | 707 | The official language of Iitti is [MASK]. |
| | | P364 | 780 | The original language of Do Phool is [MASK]. |
| Specialization (Sp) | 533 | P101 | 533 | John Archibald Wheeler works in the field of [MASK]. |
| Religious Position (RelP) | 727 | P39 | 727 | John Joseph Williams has the position of [MASK]. |
| Record Label (Rec) | 256 | P264 | 256 | Amr Mostafa is represented by music label [MASK]. |
| City (Ci (GRE)) | 3689 | place_of_birth | 2925 | Jacques Autreau was born in [MASK]. |
| | | place_of_death | 764 | Robert Jack died in [MASK]. |
| City (Ci) | 6490 | P131 | 774 | Saharsa district is located in [MASK]. |
| | | P20 | 844 | Fredegund died in [MASK]. |
| | | P937 | 864 | Xavier Zubiri used to work in [MASK]. |
| | | P19 | 704 | James Jackson Putnam was born in [MASK]. |
| | | P740 | 643 | Standard Bank was founded in [MASK]. |
| | | P190 | 283 | Inverness and [MASK] are twin cities. |
| | | P36 | 400 | The capital of Realm of Stefan Dragutin is [MASK]. |
| | | P159 | 700 | The headquarter of Shelbourne F.C. is in [MASK]. |
| | | P47 | 542 | Campi Bisenzio shares border with [MASK]. |
| | | P276 | 736 | Hiroshima International Animation Festival is located in [MASK]. |
| Continent (Con) | 964 | P30 | 964 | Dominion Range is located in [MASK]. |
| Musical Instrument (MI) | 739 | P1303 | 739 | Kerry King plays [MASK]. |
| TV Network (TVN) | 806 | P449 | 806 | The New Dick Van Dyke Show was originally aired on [MASK]. |
| Religion (Rel) | 452 | P140 | 452 | Muhammad Ali Jinnah is affiliated with the [MASK] religion. |
+ +Table A1: LAMA datasets grouped by type. Each dataset belongs to the TREx dataset, unless otherwise stated by (GRE). + +methods outperform the weighted method. This is due to the non-optimal value of $\lambda$ produced for one dataset, which reduced the average; indeed, setting a more suitable value for $\lambda$ brings the precision scores in line with the other sampling methods. Surprisingly, for RELIGIOUS POSITION, Bot10 produces better results on all metrics except $P@1$. This is because most of the gold labels relate to religious positions in Christianity, while Top10 includes a position from Judaism (rabbi), which is not the case for Bot10 and Unif. Finally, similar results are observed for GROUP and CONTINENT simply because there were fewer than 10 tokens for each of these types in the KG.
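The per-dataset tuning of $\lambda$ used throughout (Section 5.1: a grid over $\lambda$ from 0 to 5, keeping the value that maximizes precision on a 5% hold-out split) can be sketched as follows, with a toy scorer standing in for the PLM and all names hypothetical:

```python
import numpy as np

def precision_at_k(ranked_ids, gold_id, k):
    """1 if the gold token is among the top-k predictions, else 0."""
    return int(gold_id in ranked_ids[:k])

def tune_lambda(score_fn, holdout, lambdas=range(0, 6)):
    """Pick the lambda maximizing P@1 on a hold-out split.

    score_fn(prompt, lam) -> per-token scores; holdout: list of (prompt, gold_id).
    """
    best_lam, best_p = 0, -1.0
    for lam in lambdas:
        hits = sum(
            precision_at_k(np.argsort(-score_fn(prompt, lam)), gold, k=1)
            for prompt, gold in holdout
        )
        p_at_1 = hits / len(holdout)
        if p_at_1 > best_p:
            best_lam, best_p = lam, p_at_1
    return best_lam, best_p

# toy scorer over a 5-token vocabulary where lam = 3 ranks the gold token first
def toy_scores(prompt, lam):
    scores = np.zeros(5)
    scores[2] = 1.0 if lam == 3 else -1.0
    return scores

best_lam, best_p = tune_lambda(toy_scores, [("x", 2)] * 4)
```

A greedy grid of this kind also explains the failure mode discussed above: when the hold-out split is unrepresentative, the selected $\lambda$ can be non-optimal for the test data.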
|       | P@1   | P@10  | P@50  | P@100 |
|-------|-------|-------|-------|-------|
| Bl    | 0.245 | 0.523 | 0.729 | 0.811 |
| BlTE  | 0.297 | 0.582 | 0.777 | 0.849 |
| Rob   | 0.073 | 0.235 | 0.400 | 0.479 |
| RobTE | 0.177 | 0.331 | 0.481 | 0.589 |
+ +Table A2: Mean over all datasets for BERT Large (Bl) and RoBERTa base (Rob) with and without TEs. + +# B Generated Text + +Examples of text generated with TEs are reported in Table B1.
| Type | Method | P@1 | P@10 | P@50 | P@100 |
|------|--------|-----|------|------|-------|
| Co | BTo | 0.932 | 0.269 | 0.427 | 0.520 |
| | PostTE | 0.333 | 0.549 | 0.838 | 0.888 |
| | BTE | 0.393 | 0.643 | 0.874 | 0.916 |
| FP | BTo | 0.203 | 0.510 | 0.657 | 0.730 |
| | PostTE | 0.293 | 0.510 | 0.883 | 0.977 |
| | BTE | 0.276 | 0.500 | 0.856 | 0.896 |
| Ma | BTo | 0.865 | 0.945 | 0.982 | 0.988 |
| | PostTE | 0.859 | 0.939 | 0.98 | 0.987 |
| | BTE | 0.923 | 0.923 | 0.970 | 0.978 |
| Org | BTo | 0.347 | 0.733 | 0.93 | 0.981 |
| | PostTE | 0.347 | 0.733 | 0.93 | 0.981 |
| | BTE | 0.279 | 0.730 | 0.955 | 0.977 |
| Occ | BTo | 0.002 | 0.089 | 0.463 | 0.839 |
| | PostTE | 0.002 | 0.089 | 0.463 | 0.839 |
| | BTE | 0.012 | 0.023 | 0.305 | 0.941 |
| Ge | BTo | 0.002 | 0.089 | 0.463 | 0.839 |
| | PostTE | 0.002 | 0.089 | 0.463 | 0.839 |
| | BTE | 0.594 | 0.690 | 0.834 | 0.844 |
| | BTE | 0.589 | 0.686 | 0.831 | 0.841 |
| Gr | BTo | 0.025 | 0.214 | 0.652 | 0.801 |
| | PostTE | 0.025 | 0.214 | 0.652 | 0.801 |
| | BTE | 0.692 | 0.821 | 0.861 | 0.886 |
| Sp | BTo | 0.025 | 0.214 | 0.652 | 0.801 |
| | PostTE | 0.025 | 0.214 | 0.652 | 0.801 |
| | BTE | 0.692 | 0.921 | 0.861 | 0.886 |
| Re | BTo | 0.025 | 0.214 | 0.652 | 0.801 |
| | PostTE | 0.025 | 0.214 | 0.652 | 0.801 |
| | BTE | 0.692 | 0.21 | 0.861 | 0.886 |
| Cr | BTo | 0.025 | 0.214 | 0.652 | 0.801 |
| | PostTE | 0.025 | 0.214 | 0.652 | 0.801 |
| | BTE | 0.692 | 0.142 | 0.353 | 0.967 |
| Cl | BTo | 0.025 | 0.214 | 0.652 | 0.801 |
| | PostTE | 0.025 | 0.214 | 0.652 | 0.801 |
| | BTE | 0.692 | 0.577 | 0.768 | 0.849 |
| Ct | BTo | 0.025 | 0.214 | 0.652 | 0.801 |
| | PostTE | 0.025 | 0.214 | 0.652 | 0.801 |
| | BTE | 0.692 | 0.461 | 0.768 | 0.849 |
| Con | BTo | 0.025 | 0.214 | 0.652 | 0.801 |
| | PostTE | 0.025 | 0.214 | 0.652 | 0.801 |
| | BTE | 0.692 | 0.768 | 0.849 | 0.980 |
| M1 | BTo | 0.064 | 0.390 | 0.553 | 0.614 |
| | PostTE | 0.064 | 0.390 | 0.553 | 0.614 |
| | BTE | 0.692 | 0.768 | 0.849 | 0.980 |
    + +Table A3: Average precision scores for different types of the LAMA dataset for BERT (B), BERT with additional explicit type token (BTo), TE applied at the output (PostTE), and BERT with TE (BTE). + +
| Relation | Sampling | P@1 | P@10 | P@50 | P@100 |
| --- | --- | --- | --- | --- | --- |
| Co | Top10 | 0.2407 | 0.708 | 0.891 | 0.930 |
| | Bot10 | 0.326 | 0.608 | 0.836 | 0.903 |
| | Unit | 0.355 | 0.601 | 0.83 | 0.904 |
| FP | Bot10 | 0.0 | 0.04 | 0.684 | 0.746 |
| | Unit | 0.223 | 0.561 | 0.859 | 0.911 |
| Ma | Top10 | 0.770 | 0.921 | 0.974 | 0.984 |
| | Bot10 | 0.865 | 0.945 | 0.982 | 0.988 |
| | Unit | 0.865 | 0.945 | 0.982 | 0.988 |
| Org | Top10 | 0.606 | 0.866 | 0.966 | 0.979 |
| | Bot10 | 0.307 | 0.665 | 0.906 | 0.951 |
| | Unit | 0.275 | 0.588 | 0.897 | 0.951 |
| Occ | Top10 | 0.087 | 0.547 | 0.849 | 0.921 |
| | Bot10 | 0.002 | 0.089 | 0.48 | 0.845 |
| | Unit | 0.001 | 0.089 | 0.496 | 0.872 |
| Y (GRE) | Bot10 | 0.010 | 0.109 | 0.377 | 0.578 |
| | Unit | 0.018 | 0.147 | 0.625 | 0.803 |
| Ge | Bot10 | 0.006 | 0.416 | 0.703 | 0.806 |
| | Unit | 0.043 | 0.057 | 0.356 | 0.636 |
| Gr | Bot10 | 0.692 | 0.821 | 0.861 | 0.886 |
| | Unit | 0.692 | 0.821 | 0.861 | 0.886 |
| L | Bot10 | 0.612 | 0.894 | 0.975 | 0.989 |
| | Unit | 0.603 | 0.895 | 0.976 | 0.993 |
| Sp | Bot10 | 0.081 | 0.356 | 0.577 | 0.688 |
| | Unit | 0.603 | 0.895 | 0.976 | 0.993 |
| ReIP | Bot10 | 0.0 | 0.506 | 0.954 | 0.962 |
| | Unit | 0.506 | 0.954 | 0.962 | 0.962 |
| C:(GRE) | Bot10 | 0.300 | 0.600 | 0.955 | 0.959 |
| Rec | Bot10 | 0.144 | 0.428 | 0.761 | 0.881 |
| | Unit | 0.148 | 0.436 | 0.761 | 0.881 |
Table A4: Average precision scores for different types of the LAMA dataset with various sampling methods to compute the TE.
| Model | Generation |
| --- | --- |
| GPT-2 | I like a lot of the things that your parents did at the age of 9 that made you happy, and now you're in my life. "The former wrestler said he is glad he didn't pass that test. "They thought I was a bad idea, as far as it goes in life," Mayweather said of his parents and co-stars. "But that went way overboard for me. I went from having a bad childhood and now living in the Bronx with my aunt and her little sister to a guy who's trying to put it all in the right place."That brought me back to the top of my wrestling career and I think the most special person is David Letterman, who gave me that opportunity... |
| GPT-2 | I like a lot of the things that your guys are doing in the dressing room that the team does, and that's what we do. Q: You have said in a couple of different emails that you wanted to go back to Switzerland, but that's a different subject as well. A: Well, you will return. I have spoken to the players and said, that way, to go back to France and see the success. |
| GPT-2 + COUNTRY TE | Q: What's the biggest point of contention from last week? A: I was on the bench in the first half of the game against France, although today I was on the bench. But yesterday I was in the team too. It was a big point of contention. And then tomorrow I will go to Switzerland. I had a talk with the coaches this morning and they said to him there were plenty people in the dressing room. Q: Will the U.S. play at the Emirates? A: That's a possibility for sure... |
| GPT-2 + YEAR TE | I like a lot of the things that she is doing at the moment, but that is for the time being." She also said she has a lot of work ahead in mind when he arrives on an episode of "The Simpsons." For a fuller picture of her as a show host and actor see "What Happened to the Kids on the Island of the Gummy"? A documentary record (1990), also by David R. Seidman. Also on H&M Records. The last time she appeared on a TV show was on "Saturday Night Live" in 2007, and in "My Brother's Wedding" on ABC in August 2007. The first time she appeared on "The Tonight Show, with Johnny Carson" was in 1996. She also won the Tony Award with "The Little Princess" from the Writers Guild and the Outstanding Dramas Award in 2002. When interviewed on the TV show she recalled that she has worked as the 'Gummy' and 'Grandma' of the show, and gave the following list of roles for "My Brother's Wedding, with Johnny Carson" of her own first year. The '90s came in the last season of "How I Met Your Mother" which she co-starred on... |
| GPT-2 | I hate RelG, yet RelG are the ones who are allowed to carry out this murderous act by terrorising RelG in Religious Country and then going on to kill innocent RelG in the future."... |
| GPT-2 + AFFECTION TE | I hate RelG, yet RelG are the ones who are allowed to carry the torch of Religion and fight injustice and oppression. You and I will continue to fight this oppression till the end and in the long run, in every age we are going forward. You will do whatever is necessary to keep our sisters on the right path... |
Table B1: We report a set of truncated generated texts using GPT-2 and how they change using TEs for COUNTRY and YEAR. We also try to remove hate speech using an AFFECTION embedding derived from keywords such as love and cheerful. We replace specific keywords by more general keywords. RelG stands for a religious group that has been used in the underlined prompt and has been hidden for ethical considerations.

# You Are What You Talk About: Inducing Evaluative Topics for Personality Analysis

Josip Jukić Iva Vukojevic Jan Šnajder

University of Zagreb, Faculty of Electrical Engineering and Computing

Text Analysis and Knowledge Engineering Lab

Unska 3, 10000 Zagreb, Croatia

{josip.jukic;iva.vukojevic;jan.snjader}@fer.hr

# Abstract

Expressing attitude or stance toward entities and concepts is an integral part of human behavior and personality. Recently, evaluative language data has become more accessible with social media's rapid growth, enabling large-scale opinion analysis. However, surprisingly little research examines the relationship between personality and evaluative language. To bridge this gap, we introduce the notion of evaluative topics, obtained by applying topic models to pre-filtered evaluative text from social media. We then link evaluative topics to individual text authors to build their evaluative profiles. We apply evaluative profiling to Reddit comments labeled with personality scores and conduct an exploratory study on the relationship between evaluative topics and Big Five personality facets, aiming for a more interpretable, facet-level analysis. Finally, we validate our approach by observing correlations consistent with prior research in personality psychology.

# 1 Introduction

Sharing opinions has always been rooted in people's daily habits, but nowadays, it has scaled up with a plethora of user-generated texts on social media (Lee and Ma, 2012). Opinions, as dispositions toward specific entities, can be the key to understanding human behavior. Oftentimes, there is a need to predict sentiment or stance toward a particular target of interest, making opinion analysis useful in marketing research, social and political sciences, and recommender systems, to name a few. Moreover, analyzing how we express our opinions, i.e., the linguistic aspect and the psychological disposition of the opinion holder, is crucial for various downstream tasks, such as personality analysis and mental health prediction.
In public communication, among other situations, people use linguistic resources known as evaluative language (Hunston, 2010) to express their attitudes. Thompson and Hunston (2000) define evaluation as a cover term for unified phenomena of certainty and goodness/desirability, and evaluative language includes every expression of the speaker or writer's attitude or stance towards, or feelings about the entities or propositions. As evaluation is associated with psychological disposition (Malrieu, 1999), evaluative language can help uncover people's distinguishing qualities, i.e., recurrent behavioral, cognitive, or affective tendencies. Such behavioral differences resulting from interaction with the environment fall within the purview of personality psychology, where they are typically examined as personality traits – sets of characteristic patterns of behavior, cognition, and feelings stable over time and across situations (Funder, 2012).

![](images/25d9f04802cbe9bb6f389a8f4759ac7af0886839def80d7a0cbc92b8fad254b2.jpg)
Figure 1: An illustrative example of mapping user statements to evaluative topics, i.e., topics induced from evaluative texts. Based on the prevalence of such topics, we construct an evaluative profile of each author. We then measure the correlation between each author profile and personality facets, such as artistic interests, assertiveness, intellect, and imagination.

Moreover, personality traits have been shown to correlate with occupational preferences, political or religious tendencies, intelligence, personal interests, and opinions (Larson et al., 2002; Ackerman and Heggestad, 1997; Hyman, 1957).

To the best of our knowledge, evaluative language has not yet been studied in the context of personality analysis on social media. This can perhaps be attributed to methodological challenges. One of the difficulties in analyzing evaluative language is separating the signal from the noise.
Although evaluation in text is pervasive, it remains a challenge to automatically distinguish evaluative from non-evaluative text. Evaluative expressions come in all sorts of flavors, with varying amounts of explicit lexical stance markers. Another difficulty is choosing the right level of abstraction both for personality and evaluative language. If the evaluation targets are too specific, it is difficult to relate them to personality. Nevertheless, we can choose the appropriate operationalization of personality in terms of its breadth, thus aligning it with the abstraction level of the evaluative targets.

In this paper, we study how evaluative language on social media can be used for personality analysis. More specifically, our study aims to explore how the topics people talk about are related to their personality, as illustrated in Figure 1. To this end, we introduce the notion of an evaluative topic, which is a topic constructed from evaluative text. To address the signal-noise problem in detecting evaluative text, we develop an iterative filtering technique based on detecting evaluative expressions and paraphrase mining. We then leverage topic models to induce evaluative topics from prefiltered evaluative text. Aiming to maximize coherence and diversity, we test several topic model families, including probabilistic, decompositional, and neural topic models. Finally, we link evaluative topics to individual text authors to construct authors' evaluative profiles.

In the experimental part, we investigate the relationship between evaluation and personality. Our focus is on the Big Five model of personality (Goldberg, 1981), which includes broad personality traits and their more specific (and arguably more interpretable) aspects called facets.
We present a correlation study of evaluative topics and Big Five facets on comments from Reddit, one of the most popular discussion websites, which covers an astoundingly diverse range of topics while preserving user anonymity. We find correlations between evaluation and personality that are consistent with prior research in personality psychology. Finally, we provide empirical evidence that evaluative statements have a stronger association with personality than non-evaluative expressions.

In summary, our contribution is threefold: we (1) develop an approach for evaluative author profiling based on topic models with evaluative filtering, (2) apply evaluative profiling on Reddit comments, and (3) study how evaluative author profiles of Reddit users correlate with Big Five personality facets.

The rest of the paper is structured as follows. Section 2 lays out the background and related work. We describe our technique for filtering evaluative text along with topic modeling in Section 3. We conclude the section by presenting the idea of evaluative author profiles. We describe our experiments and results in Section 4. Finally, Section 5 lays out the main conclusions and takeaways of the paper.

# 2 Background and Related Work

Evaluative language. In linguistics, studies of evaluative language are embedded in functional approaches to language that aim to understand and explain linguistic structures in terms of the semantic and communicative functions of language and which assume that its primary function is to be a vehicle for social interaction (Allan, 2007). Most research in evaluative language covers only a particular niche of evaluation, such as subjectivity (Wiebe et al., 2004) and stance (Kiesling et al., 2018).
For instance, Biber and Finegan (1989) describe lexical and grammatical markers of stance, i.e., categorizations along the continuum of epistemic and attitudinal meanings, developing groups of markers for people's attitude, affect, and assessment. More recently, Pavalanathan et al. (2017) compiled a lexicon of stance markers on Reddit data. On the other hand, because sentiment analysis is ubiquitous in NLP, evaluative language has been viewed through the prism of sentiment (Benamara et al., 2017; Pang and Lee, 2008). In our work, we attempt to combine several aspects of evaluative language, namely sentiment, stance, and opinion markers.

Target awareness. Given the nature of expressing opinions, evaluation is directed toward a particular target. Among others, sentiment-based approaches to evaluation have been prevalent, which led to the establishment of aspect-based sentiment analysis (ABSA), an NLP task that combines aspect extraction with sentiment classification (Pontiki et al., 2014). Recent research in ABSA has evolved from traditional perspectives, e.g., using conditional random fields for aspect extraction as a sequence labeling task (Toh and Wang, 2014), to neural-based approaches (He et al., 2017; Luo et al., 2019; Hoang et al., 2019; Tulkens and van Cranenburgh, 2020). However, ABSA systems are better suited for smaller domains. Besides ABSA, there have been target-based approaches in stance classification (Du et al., 2017) and topic-dependent argument classification (Reimers et al., 2019), among others. As we are trying to cover diverse domains, we opt for a topic-level approach to targets.

NLP and personality. NLP has been used extensively for personality analysis, starting with essay analysis (Pennebaker and King, 2000) and followed by social network research (Park et al., 2014; Schwartz et al., 2013). The more recent studies were mainly conducted with Facebook data. For example, Kulkarni et al.
(2018) used trait-level factor analysis for the Big Five personality system on Facebook. However, as far as we know, there is no research that leverages NLP to analyze personality in the context of evaluative language. Moreover, there is little research in NLP that covers facet-level personality analysis.

# 3 Methodology

We propose a three-step process from raw text to evaluative profiles. We start with evaluative filtering to extract evaluative text and then construct topics from the filtered dataset. Finally, we use the developed evaluative topics to create the corresponding evaluative profiles.

# 3.1 Dataset

We used PANDORA, a dataset with personality scores for Reddit users, extracted from self-reported personality questionnaire results (Gjurković et al., 2021). In total, there are 1,608 users with self-reported Big Five scores. These users have written a total of 1.3M comments, consisting of 14.3M sentences. Additionally, 127 users self-reported their questionnaire scores for the NEO PI-R facets. NEO PI-R provides information on six facets of each Big Five personality trait. A facet is a specific and unique aspect of a broader personality trait. For example, anxiety and depression are facets of neuroticism, while friendliness and gregariousness are extraversion facets. After we applied preprocessing, which included segmenting the comments into sentences, we ended up with 6.5M sentences. We filtered out non-English comments since we used models pre-trained on English texts. We attach more details on the preprocessing step in Appendix A.1.

# 3.2 Evaluative filtering

Although social media abounds with evaluative language, it is challenging to automatically distinguish evaluative from non-evaluative text. To filter out non-evaluative expressions, we collected comments with evaluative markers – lexical or grammatical language forms that express the act of evaluation.
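As an illustration of marker-based filtering of this kind, the sketch below combines a few regular-expression patterns with lexicon-score thresholds. The pattern list and all threshold values are placeholders invented for the sketch, not the paper's actual resources:

```python
import re

# Illustrative evaluative patterns (placeholders, not the paper's list):
# first-person opinion verbs, opinion idioms, and online abbreviations,
# with optional negation/modifier in the verb pattern.
EVAL_PATTERNS = [
    r"\bI\s+(?:really\s+|honestly\s+|don'?t\s+)?(?:like|hate|believe|support)\b",
    r"\bin\s+my\s+(?:honest\s+)?opinion\b",
    r"\bpersonally\b",
    r"\bIMO\b",
    r"\bFMPOV\b",
]
EVAL_RE = re.compile("|".join(EVAL_PATTERNS), re.IGNORECASE)

def is_evaluative(sentence, opinion_score, stance_score, sentiment_score,
                  t_opinion=0.5, t_stance=0.5, sentiment_upper=0.9):
    """Keep a sentence only if it matches an evaluative pattern and its
    lexicon scores clear the thresholds; extreme sentiment is excluded
    because such sentences often lack an explicit target. All threshold
    values here are invented for the sketch."""
    if not EVAL_RE.search(sentence):
        return False
    return (opinion_score >= t_opinion
            and stance_score >= t_stance
            and sentiment_score <= sentiment_upper)

print(is_evaluative("In my honest opinion, the ending was rushed.", 0.7, 0.6, 0.4))  # True
print(is_evaluative("The train leaves at noon.", 0.7, 0.6, 0.4))                     # False
```

In a real pipeline the three scores would come from a sentiment analyzer and the opinion/stance lexicons, and the thresholds would be set per-corpus (the paper uses the 50th percentile per category with an upper bound on sentiment).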
We combined off-the-shelf sentiment analysis tools with opinion and stance lexicons to cover different aspects of evaluative language. On top of that, we searched for sentences that contain evaluative patterns, i.e., phrases that express opinions.

To extract sentiment-laden sentences, we used VADER (Hutto and Gilbert, 2014), where we summed the positive and negative VADER scores to determine the overall sentiment score. We used the opinion lexicon devised by Hu and Liu (2004) and the lexicon of stance markers of Pavalanathan et al. (2017). Inspired by Hunston and Thompson (2000), we compiled a list of evaluative patterns in the form of regular expressions that we matched across the dataset. These include phrases such as I like/hate/believe/support, personally, in my (honest) opinion, etc. Along with variations of the aforementioned phrases, we extended the regular expressions to support negations and modifiers, as well as the vernacular language used in the online community (e.g., IMO – in my opinion, FMPOV – from my point of view). To collect matched sentences, we set a constraint that a given match must exceed the thresholds for sentiment, opinion, and stance scores. We took the intersection of matched sentences in the 50th percentile for each category except sentiment, for which we set an upper bound. This is because we found that sentences with extreme sentiment scores often do not have an explicit target, and the target cannot be induced without additional context. $^{2}$
After applying evaluative filtering on PANDORA, we obtained 29k sentences, focusing on precision as we used strict pattern matching with high evaluative scores. To improve recall, we developed quasi-snowballing (QSB), a simple paraphrase mining technique. $^{3}$ QSB is an iterative procedure that starts with a seed set of evaluative expressions and extends it with similar statements. We employed sentence transformers $^{4}$ (Reimers and Gurevych, 2019) to compute contextualized representations and used these representations to detect paraphrases. We initialized QSB with the filtered evaluative sentences as the seed set. Afterward, we used cosine similarity as a criterion to extract similar sentences according to the similarity threshold $t_{sim}$, which is adjusted at each step by the similarity growth factor $\gamma$. Inspired by simulated annealing, the factor $\gamma$ increases the threshold exponentially to achieve easier convergence as the whole set of sentences grows with each iteration. At the end of each iteration, we augmented the old seed set with the newly obtained matches. The process stops when there are no more candidates. Using QSB, we obtained 310k sentences with evaluative markers (details in Appendix A.2).

QSB mines expressions that can differ in target, polarity, or intensity. Since we use a high similarity threshold, in most cases only one of the above three components will be different in a paraphrase match. Multiple iterations of paraphrase mining can evolve the original sentence, resulting in more lenient matches overall, as shown in Table 1.

# 3.3 Evaluative topics

In the second step, we applied topic modeling to the pre-filtered evaluative comments to obtain evaluative topics. We adopted the definition of a topic as a distribution over a fixed vocabulary of terms (Blei and Lafferty, 2009), and further defined an evaluative topic as a topic built from target-specific opinions derived from evaluative language. To produce evaluative topics, we experimented with traditional probabilistic models as well as neural-based architectures. Furthermore, since we are dealing with Reddit comments, which tend to be short, we considered several topic modeling approaches for short texts.

LDA.
From the family of traditional topic models, we opted for latent Dirichlet allocation (LDA), a well-established probabilistic topic model (Blei et al., 2003), which also serves as a strong baseline. Considering that LDA deals poorly with short texts due to the data sparsity problem (Hong and Davison, 2010), following Zuo et al. (2016), we grouped comments into larger documents. For each author, we created pseudo-documents grouped by subreddit, i.e., a forum dedicated to a particular topic on Reddit.

BTM. We tested the biterm topic model (BTM), which is primarily designed for short texts (Cheng et al., 2014). We fed the model with comment-level text, where we filtered out non-evaluative sentences from the comment.

ABAE. We tried out a neural-based architecture, Attention-based Aspect Extraction (ABAE), an autoencoder proposed by He et al. (2017). We include this model as it is designed specifically for aspect clustering, which we expect to align well with our objective of building evaluative topics. ABAE exploits word embeddings to improve coherence and attempts to group together words that appear in similar contexts. Furthermore, ABAE uses the attention mechanism to reduce the relative importance of irrelevant words during training. We adopted ABAE but made a few changes to the training procedure. First, we trained custom Word2Vec embeddings (Mikolov et al., 2013) on PANDORA. We also modified the reconstruction loss function from the ordinary dot product to cosine similarity with values in the $[-1,1]$ interval, which improved the learning stability. We fed the model with segmented evaluative sentences to learn the topics.

CTM. Finally, we experimented with the combined topic model (CTM) of Bianchi et al. (2021), which has been shown to increase coherence compared to other topic models. CTM is a blend of a variational autoencoder and embeddings from a sentence transformer.

Topic models are usually evaluated by means of topic coherence.
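As a concrete reference point for the NPMI coherence used to compare these models, here is a minimal sketch that scores a topic's top words by Boolean document co-occurrence. It is a simplified reading (implementations often use sliding-window counts instead), and the toy documents are invented:

```python
import math
from itertools import combinations

def npmi_coherence(topic_words, documents, eps=1e-12):
    """Average normalized PMI over all pairs of a topic's top words,
    using Boolean document co-occurrence counts. For each pair,
    npmi = pmi / -log p(x, y): 1 means the pair always co-occurs,
    0 means independence, -1 means the words never occur together."""
    docs = [set(d.lower().split()) for d in documents]
    n = len(docs)

    def p(*words):
        # Fraction of documents containing all the given words.
        return sum(all(w in d for w in words) for d in docs) / n

    scores = []
    for x, y in combinations(topic_words, 2):
        pxy = p(x, y)
        if pxy == 0.0:
            scores.append(-1.0)  # never co-occur
        else:
            pmi = math.log(pxy / (p(x) * p(y) + eps))
            scores.append(pmi / -math.log(pxy + eps))
    return sum(scores) / len(scores)

docs = ["movie great plot", "movie great acting", "weather cold rain"]
print(npmi_coherence(["movie", "great"], docs))   # close to 1: always together
print(npmi_coherence(["movie", "cold"], docs))    # -1.0: never together
```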
However, this method suffers from a validation gap, i.e., automated coherence is not validated by human experimentation (Hoyle et al., 2021). To mitigate this problem, we count
| Original hit | Paraphrase match | Similarity |
| --- | --- | --- |
| Age is inexcusable to lie about **IMO**. | I'm not a fan of lying about age. | .86 |
| **I support** the death penalty, but I would never label myself as pro-life. | I'm pro-life in the sense that I would rather not have people abort later term when it could be reasonably considered a person... | .77 |
| I'm torn on this one, because **I support** trans folks 100% and **I believe** that a trans woman is a woman, end of story, and same for a man. | I'm supportive of transgender people transitioning and being legally treated as someone of the opposite sex. | .78 |

Table 1: Examples of QSB paraphrase matching. The first column represents the comments that are matched to regular expressions, with matched text shown in bold. The second column lists matched paraphrases.
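The QSB loop from Section 3.2 can be sketched as follows. As a rough stand-in for the sentence-transformer embeddings we use bag-of-words vectors, and the threshold and growth-factor values are illustrative, not the paper's:

```python
import math
from collections import Counter

def embed(sentence):
    # Stand-in for sentence-transformer embeddings: a bag-of-words vector.
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def quasi_snowball(seed, pool, t_sim=0.5, gamma=1.1):
    """One possible reading of QSB: repeatedly move pool sentences whose
    similarity to any seed sentence exceeds t_sim into the seed set; the
    threshold grows by gamma each iteration so the procedure converges."""
    seed, pool = list(seed), list(pool)
    while True:
        matches = [s for s in pool
                   if any(cosine(embed(s), embed(t)) >= t_sim for t in seed)]
        if not matches:
            return seed
        seed.extend(matches)
        pool = [s for s in pool if s not in matches]
        t_sim = min(1.0, t_sim * gamma)   # anneal the threshold upward

seed = ["i like this movie a lot"]
pool = ["i like this film a lot", "the weather is cold today"]
print(quasi_snowball(seed, pool))
# -> ['i like this movie a lot', 'i like this film a lot']
```

The loop terminates because the pool strictly shrinks on every matching iteration, and the rising threshold makes later matches progressively harder, mirroring the annealing intuition in the paper.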
| Model | NPMI | IRBO | #topics |
| --- | --- | --- | --- |
| LDA | -.1413 | .9905 | 20 |
| BTM | -.2159 | .8241 | 30 |
| ABAE | -.0521 | .9833 | 20 |
| CTM | .0628 | .9981 | 20 |
Table 2: Topic modeling evaluation results. NPMI is a coherence score in the $[-1,1]$ range, with $-1$ indicating that the topic's representative terms never occurred together, 0 indicating that the term occurrences were independent of each other, and 1 indicating that the terms co-occurred perfectly with each other. IRBO measures topic diversity with 1 for completely different and 0 for identical topics. We repeated each experiment 10 times with different seeds. Shown in bold is the score that is significantly better than the scores for the rest of the models (independent Wilcoxon test for each model pair with $p < .01$, adjusted for family-wise error rate with the Holm-Bonferroni method).

token co-occurrences on the whole dataset and not only on the training data, which has been shown to have a stronger association with human judgment (Ding et al., 2018). Specifically, we used the normalized pointwise mutual information (NPMI) to evaluate the coherence of the induced topics. For evaluating diversity, we adopted the metric proposed by Bianchi et al. (2021), defined as the reciprocal of the standard RBO (Terragni et al., 2021). We used NPMI and IRBO as two criteria of a multi-objective optimization procedure. We determined the Pareto front of the trained models, i.e., the set of non-dominated models, from which we chose the model with the smallest number of topics (Table 2).

# 3.4 Evaluative author profiles

In the third step, we leverage evaluative topics to produce evaluative author profiles. Specifically, we describe each user in terms of topic prevalence, where user text is distributed as a sentiment-weighted mixture of topics across different targets. Each user is thus assigned the average topic prevalence. We compute the average topic distributions for the entire collection of user sentences, where a given sentence contributes with a value from the $[0, 1]$ interval for each topic.
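A minimal sketch of this aggregation, assuming each sentence already has a topic mixture and a VADER-style intensity score (the toy values below are invented):

```python
def evaluative_profile(doc_topics, sentiments=None):
    """Average a user's per-sentence topic distributions into a profile.
    doc_topics: list of K-dim topic mixtures, one per evaluative sentence.
    sentiments: optional per-sentence intensities (sum of the positive and
    negative VADER scores); when given, the sentiment-weighted profile is
    computed instead of the plain average."""
    n, k = len(doc_topics), len(doc_topics[0])
    if sentiments is None:
        sentiments = [1.0] * n
    return [sum(s * d[j] for s, d in zip(sentiments, doc_topics)) / n
            for j in range(k)]

docs = [[0.8, 0.2], [0.4, 0.6]]            # two sentences, two topics (toy values)
u = evaluative_profile(docs)               # plain average
v = evaluative_profile(docs, [1.0, 0.5])   # sentiment-weighted average
print(u, v)
```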
To formalize our procedure, we begin by defining a topic distribution for a specific document $\mathbf{d} = [c_1\,c_2\ldots c_K]^\top$, where $\mathbf{d}$ represents a document (e.g., sentence, comment), and $c_{k}$ is the corresponding mixture component for the $k$-th topic. We aggregate the values of the user's evaluative sentences to compute the $n$-th user's components for each topic and concatenate the aggregations into the vector $\mathbf{u}^{(n)}$:

$$
\mathbf{u}^{(n)} = \frac{1}{N_{n}} \sum_{i=1}^{N_{n}} \mathbf{d}^{(n,i)},
$$

where $N_{n}$ is the number of the $n$-th user's documents and $\mathbf{d}^{(n,i)}$ is the $i$-th document of the $n$-th user. Moreover, we incorporate the sentiment intensity information to obtain a sentiment-enhanced representation $\mathbf{v}^{(n)}$:

$$
\mathbf{v}^{(n)} = \frac{1}{N_{n}} \sum_{i=1}^{N_{n}} s^{(n,i)} \mathbf{d}^{(n,i)},
$$

where $s^{(n,i)}$ is the sentiment intensity for the $i$-th document of the $n$-th user calculated as the sum of the positive and negative VADER scores.

The use of sentiment intensity in lieu of polarity deserves further explanation. When considering sentiment polarity, we can discern two types: the user's sentiment polarity for a given statement and the implicit polarity (Russo et al., 2015) of the topic itself. This gives rise to a number of different ways in which sentiment information can be incorporated into evaluative representations. However, the inclusion of sentiment polarity makes it difficult to distinguish whether the topic sentiment is driven primarily by implicit or explicit polarity. We sought parsimony, primarily to facilitate explainability, and thus chose to use only sentiment intensity. Future work may look into combining both types of sentiment.

# 4 Experiments

Our investigation of the association between evaluative language and personality proceeded in two steps.
We first built evaluative author profiles from Reddit comments from PANDORA, as described in the previous section. To build evaluative topics, we used the CTM model on the filtered evaluative sentences, as CTM yielded the best result in terms of coherence and diversity scores. We induced 20 evaluative topics, as per the results of the optimization process. Table 3 shows a sample of the topics alongside manually assigned labels. + +In the next step, we carried out a correlation study on the relationship of Big Five personality facets and evaluative topics. We adopted the Revised NEO Personality Inventory (NEO PI-R), designed to measure the Big Five personality traits: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism (Costa Jr and McCrae, 1995). In the first part of the analysis, we examined individual correlations between particular topics and facets. In the second part, we used canonical correlation analysis (CCA) to explore common associations between the entire set of topics on the one side and the set of all Big Five facets on the other. We chose to focus on personality facets rather than personality traits, hypothesizing that the abstraction level of facets aligns better with the granularity of induced topics. In that sense, the choice of facets supports the Brunswik symmetry principle, which stipulates that analyzed constructs (in our case, personality and topics) must have similar levels of generality (Wittmann, 2012). + +# 4.1 Pairwise correlations + +We calculated partial pairwise correlations between evaluative author profiles and Big Five facets with control for gender as a possible confounder. For gender, we used the values provided with the PANDORA dataset, which surpassed the $F_{1}$ score of .90 (Gjurković et al., 2021).7 Figure 2 shows sig- + +nificant correlations corrected for false discovery rate with the Benjamini-Hochberg method. 
We observe numerous small correlations and even moderate correlations $(> .4)$ for some topic-facet pairs (Bosco et al., 2015). This is surprising, given that the average correlation in questionnaire-based studies of individual differences is .19 (Gignac and Szodorai, 2016) and that text data are noisier than questionnaire data.

To assess the validity of our results, we consulted the personality psychology literature for reference. According to psychological research, facets of openness to experience are expected to correlate positively with the curiosity, food/drinks, fiction, music, gaming, political, personality, debate, and occult topics, while they correlate negatively with the religion topic (Stewart et al., 2022; Ozer and Benet-Martinez, 2006; Soto, 2019; Intiful et al., 2019; Skimina et al., 2021; Church et al., 2008; Marshall et al., 2015; Chauvin and Mullet, 2021). As Figure 2 shows, the majority of our results are consistent with this prior research. The three listed associations that our results do not significantly support are debate and gaming (both in the hypothesized direction, but not significant) and occult (opposite sign). However, we note that in most of the referenced psychological literature, effects were analyzed at the trait level rather than the facet level.

# 4.2 Canonical correlation analysis

As facets are inherently intertwined, we can gain deeper insight by analyzing the facets jointly with respect to the set of evaluative topics. To this end, we employed canonical correlation analysis (CCA). CCA finds linear combinations of evaluative profiles on the one side and facets on the other that maximize the correlation between the newly created canonical variables. CCA assumes that both variable sets have multivariate normal distributions, which we confirmed with the Henze-Zirkler test (Henze and Zirkler, 1990).
We computed 20 canonical dimensions and sorted them in descending order + +
| Label | Representants |
| --- | --- |
| open | people, lot, free, good, permit, time, going, open, big, trying |
| food/drinks | taste, meat, smell, beer, texture, flavour, savory, drink, palate, sweet |
| religion | testimony, believe, truth, god, witness, primordial, bible, church, jesus, earth |
| demeaning | shit, tell, talking, strangers, rude, stupid, bitch, upset, weird, shitty |
| investing/finance | recommend, pack, vouch, opt, buying, invest, fund, ticket, stock, dealt |
| fiction | watch, character, read, movies, story, books, thought, cool, new, anime |
| music | songs, album, pop, favourite, music, voice, listen, lyrics, track, sound |
| gaming | play, game, level, team, damage, skill, hit, gear, combat, dungeons |
| social issues | society, human, religion, culture, rights, moral, white, argument, victimhood, black |
| hatred | despise, hate, passion, stand, fucking, goddamn, nerd, mad, smug, ads |
| day-to-day | day, work, hours, week, spend, food, money, sleep, home, eat |
| relationships | relationship, woman, dating, child, partner, man, sex, ex, gay, alimony |
| argument | opinion, salt, grain, sanctimonious, worthless, controversial, assessment, disclaimer, 180, nuanced |
| political | government, party, vote, trump, support, country, state, speaker, system, tax |
| sexual/looks | hot, hair, sexy, gross, facial, body, attractive, wear, porn, dimorphism |
| personality | type, personality, mbti, emotions, test, cognitive, learning, ideas, brain, jung |
| debate | understand, discussion, post, saying, person, wrong, think, point, internet, debate |
| maltreatment | despised, hated, school, dropped, sucked, harass, refuses, skipped, hood, beaten |
| occult | fate, chime, cat, spirits, guides, luck, desperately, talisman, invite, mirror |
Table 3: Evaluative topics produced by the CTM. The topic labels shown in the first column were manually assigned as the most frequent label among five annotators. The second column shows the top 10 words for each topic.

![](images/b7d5cfe75e5e130121672503cfdc7035d3dcd53c7d91cdc97734ec9e9d86cd7e.jpg)
Figure 2: Pairwise partial correlations between Big 5 facets (x-axis) and evaluative topics (y-axis) with control for gender. We show only significant correlations $(p < .01)$, determined using Fisher's z-transformation of correlations and corrected for false discovery rate with the Benjamini-Hochberg method.

of correlation magnitude. We found statistically significant correlations for the first three pairs of canonical variates with Wilks' lambda test.

An additional question we wanted to answer is whether personality analysis benefits from evaluative topics (constructed from pre-filtered evaluative text) as opposed to ordinary topics (constructed from all text). To this end, we applied CCA to three different data subsets: (1) unfiltered text, (2) text obtained with evaluative filtering, and (3) non-evaluative text, obtained as the difference between the first and the second set. In addition, we ran CCA on (4) evaluative text with sentiment intensity in order to investigate whether evaluative profiles benefit from sentiment information. Figure 3 shows the canonical correlations of the first three canonical pairs on the four data subsets. Higher correlations computed by CCA indicate stronger associations between the two sets of variables. Thus, under the assumption that evaluative profiles and personality are associated, obtaining higher correlations supports the construct validity of evaluative profiles as construed by our model. Construct validity, in this case, concerns the extent to which an evaluative topic accurately assesses what it is designed for.
With this in mind, two main observations emerge from the results: (1) evaluative prefiltering seems to be more apt than using unfiltered data for establishing an association with Big 5 personality, and (2) sentiment information amplifies the correlations.

To further investigate the relationship between facets and evaluative topics as well as their individual importance, we computed canonical loadings, i.e., canonical structure correlations, by projecting both sets of original variables onto the first and second canonical dimensions (Figure 4). Canonical loadings reflect the shared variance of the observed variable and the canonical variate. We followed the common practice in psychometrics (ter Braak, 1990) and computed the intra-set variable-to-variate correlation for the instrument variable (evaluative topic) and the inter-set correlation for the goal variable (facet). We identified three distinct clusters of facets and topics. The first one, in the bottom-right corner of Figure 4 (music, fiction; artistic-interests, imagination), indicates the openness aspect of openness to experience. The bottom-left cluster (maltreatment, demeaning; vulnerability, immoderation, and excitement-seeking) roughly corresponds to an unpleasantly emotionally charged cluster. Finally, the top-left cluster (political, social issues; activity level, assertiveness) can be interpreted as social engagement. We further validate the CCA results by showing the dispersion of facets in the first and second canonical dimensions (Figures 5 and 6 in the Appendix). We find that facets from the same domain are grouped, but that there are also associations between facets from different domains, which is expected based on prior research (Schwaba et al., 2020).
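For illustration, CCA can be computed from the whitened cross-covariance matrix. The following NumPy sketch is our own minimal implementation (not the code used in the study); it returns the canonical correlations in descending order together with the projection weights for both variable sets:

```python
import numpy as np

def cca(X, Y, n_components=3):
    """Canonical correlation analysis via SVD of the whitened
    cross-covariance matrix of X (users x topics) and Y (users x facets)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]

    def inv_sqrt(C):
        # Inverse matrix square root of a symmetric PSD matrix,
        # with a small ridge for numerical stability.
        w, V = np.linalg.eigh(C + 1e-8 * np.eye(C.shape[0]))
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Cxx = X.T @ X / (n - 1)
    Cyy = Y.T @ Y / (n - 1)
    Cxy = X.T @ Y / (n - 1)
    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K)          # singular values = canonical correlations
    Wx = inv_sqrt(Cxx) @ U[:, :n_components]
    Wy = inv_sqrt(Cyy) @ Vt.T[:, :n_components]
    return s[:n_components], Wx, Wy
```

Canonical loadings are then the correlations between each original variable (topic or facet) and the canonical variates `X @ Wx` and `Y @ Wy`.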
![](images/cc2df1847155d1f3f653d2d35f672c044ca936d064a4a84e719894b00a63c35b.jpg)
Figure 3: Canonical correlations of the first three canonical pairs $(p < .01)$ estimated on four data subsets: original text (Unfiltered), text obtained with evaluative filtering (Eval), Eval with sentiment information included when calculating the evaluative profiles (Eval + Senti), and non-evaluative text (Non-eval) obtained as the set difference between the Unfiltered and Eval sets.

![](images/39feeb2415120d8a7eb37e5a99f03e64260184eb4413323d6715bb0422dd5af6.jpg)
Figure 4: Canonical loadings of the first two dimensions for facets and evaluative topics. The proximity of individual points indicates the strength of association between two data points. Specifically, when two points, regardless of their type (topic or facet), have similar angles with respect to the origin and similar magnitudes, this suggests that the two points contribute similarly to the canonical variate and are more strongly associated.

# 5 Conclusion

The relationship between evaluative language and personality has been understudied in NLP research. We aim to fill this gap by proposing evaluative author profiling, which links authors' text to topics obtained from text filtered for evaluative language. We applied evaluative profiling to a dataset of Reddit comments with self-reported Big 5 personality facet scores. Using canonical correlation analysis, we showed that facets within the same trait have stronger associations in the canonical space. We found that evaluative topics have moderate correlations with Big 5 facets. Moreover, we corroborated the hypothesis that evaluative expressions hold greater informational value for personality analysis than unfiltered texts. Additionally, we showed that non-evaluative text has a much weaker association with Big 5 facets compared to evaluative text. Finally, we observed correlations consistent with previous research in personality psychology.
We believe that our study can contribute to a better understanding of evaluative language on social media and how it relates to personality traits.

# Ethical considerations

Our research has been approved by an academic IRB.

Potential harm. Ours is an exploratory study, so one cannot generalize based on the correlations we obtained; doing so could lead to unsupported generalizations, which may discriminate against certain groups of people.

Collecting data from users. According to the American Psychological Association's Ethical Principles, researchers may waive informed consent when analysing archival data (i.e., data collected before the study began) if disclosure of responses would not expose participants to risk of criminal or civil liability or harm. In our case, we use Reddit data, where Reddit users agree via the Reddit User Agreement that they will not disclose any sensitive information of others, and they are informed that their comments are publicly available. As users may opt out and delete their data, we removed deleted user accounts and comments. We also present our results at the group level rather than the individual level to further protect users' identities.

Misuse potential. In principle, evaluative profiling could be used for making decisions about individuals based on their social media textual data, e.g., in micro-targeting. We strongly advocate against the use of our methods for these or other ethically questionable applications.

Biases. We note that we used self-reported data from Reddit. As such, this data may not be perfectly accurate and may include various biases, notably acquisition bias. The dataset we use may not be representative of the Reddit population, and it
+ +# Limitations + +Although our evaluative filtering technique resulted with evaluative topics that have stronger correlations with Big 5 facets than non-evaluative text, filtering cannot be validated in isolation without additional annotation. Additionally, incorporating additional information such as sentiment polarity can hurt interpretability of the results. A possible solution to circumvent this problem is to use structural topic models (Roberts et al., 2014), which can model additional information such as user demographics as covariates. Since we use topic modeling and try to optimize topic diversity and coherence, which has been criticized as of late (Hoyle et al., 2021), it can be hard to choose appropriate representatives for the topic and there are no guarantees that a certain topic will be coherent enough to translate to a meaningful concept. From the psychology point of view, we conduct only an exploratory study where we observe correlation, so we cannot make any confirmation of the results from personality psychology, but only support it with associations. Moreover, since we conduct a facet-level study and take only the scores from the same questionnaire to mitigate the noise from different tests, the number of reports of facet scores is relatively small $(n = 127)$ compared to the total number of users with reported traits scores $(n = 1,608)$ . + +Our study was limited to English texts. A potential transfer to other languages would require additional resources, namely adequate lexicons and sentiment classifier to enable evaluative filtering, making the transfer non-trivial. Finally, we would need to retrain the topic models on texts in the target languages. + +# Acknowledgements + +We thank the reviewers for their comments. We also thank Irina Masnikosa for her invaluable linguistic advice. 
This work has been fully supported by the Croatian Science Foundation under the project IP-2020-02-8671 PSYTXT ("Computational Models for Text-Based Personality Prediction and Analysis"). + +# References + +Phillip L Ackerman and Eric D Heggestad. 1997. Intelligence, personality, and interests: evidence for overlapping traits. Psychological bulletin, 121(2):219. +Keith Allan. 2007. The western classical tradition in linguistics. Equinox London. +Farah Benamara, Maite Taboada, and Yannick Mathieu. 2017. Evaluative language beyond bags of words: Linguistic insights and computational applications. Computational Linguistics, 43(1):201-264. +Federico Bianchi, Silvia Terragni, and Dirk Hovy. 2021. Pre-training is a hot topic: Contextualized document embeddings improve topic coherence. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 759-766, Online. Association for Computational Linguistics. +Douglas Biber and Edward Finegan. 1989. Styles of stance in english: Lexical and grammatical marking of evidentiality and affect. Text - Interdisciplinary Journal for the Study of Discourse, 9(1):93-124. +David M Blei and John D Lafferty. 2009. Topic models. In Text mining, pages 101-124. Chapman and Hall/CRC. +David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993-1022. +Frank Bosco, Kulraj Singh, James Field, and Charles Pierce. 2015. Correlational effect size benchmarks. Journal of Applied Psychology, 100:431-449. +Bruno Chauvin and Etienne Mullet. 2021. Individual differences in paranormal beliefs: The differential role of personality aspects. Current psychology, 40(3):1218-1227. +Xueqi Cheng, Xiaohui Yan, Yanyan Lan, and Jiafeng Guo. 2014. Btm: Topic modeling over short texts. IEEE Transactions on Knowledge and Data Engineering, 26(12):2928-2941. 
+A Timothy Church, Marcia S Katigbak, Jose Alberto S Reyes, Maria Guadalupe C Salanga, Lilia A Miramontes, and Nerissa B Adams. 2008. Prediction and cross-situational consistency of daily behavior across cultures: Testing trait and cultural psychology perspectives. Journal of Research in Personality, 42(5):1199-1215. +Paul T Costa Jr and Robert R McCrae. 1995. Domains and facets: Hierarchical personality assessment using the revised neo personality inventory. Journal of personality assessment, 64(1):21-50. +Ran Ding, Ramesh Nallapati, and Bing Xiang. 2018. Coherence-aware neural topic modeling. In Proceedings of the 2018 Conference on Empirical Methods + +in Natural Language Processing, pages 830-836, Brussels, Belgium. Association for Computational Linguistics. +Jiachen Du, Ruifeng Xu, Yulan He, and Lin Gui. 2017. Stance classification with target-specific neural attention. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 3988-3994. +David C Funder. 2012. Accurate personality judgment. Current Directions in Psychological Science, 21(3):177-182. +Gilles E Gignac and Eva T Szodorai. 2016. Effect size guidelines for individual differences researchers. *Personality and individual differences*, 102:74-78. +Matej Gjurković, Mladen Karan, Iva Vukojevic, Michaela Bošnjak, and Jan Šnajder. 2021. PANDORA: talks: Personality and demographics on Reddit. In Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media, pages 138-152, Online. Association for Computational Linguistics. +Lewis R Goldberg. 1981. Language and individual differences: The search for universals in personality lexicons. Review of personality and social psychology, 2(1):141-165. +Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2017. An unsupervised neural attention model for aspect extraction. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 388-397, Vancouver, Canada. Association for Computational Linguistics. +Norbert Henze and Bernd Zirkler. 1990. A class of invariant consistent tests for multivariate normality. Communications in Statistics-theory and Methods, 19:3595-3617. +Mickel Hoang, Oskar Alija Bihorac, and Jacobo Rouces. 2019. Aspect-based sentiment analysis using BERT. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 187-196, Turku, Finland. Linköping University Electronic Press. +Liangjie Hong and Brian D Davison. 2010. Empirical study of topic modeling in twitter. In Proceedings of the first workshop on social media analytics, pages 80-88. +Alexander Hoyle, Pranav Goel, Andrew Hian-Cheong, Denis Peskov, Jordan Boyd-Graber, and Philip Resnik. 2021. Is automated topic model evaluation broken? the incoherence of coherence. In Advances in Neural Information Processing Systems, volume 34, pages 2018-2033. Curran Associates, Inc. +Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168-177. + +Susan Hunston. 2010. Corpus approaches to evaluation: Phraseology and evaluative language, volume 13. Routledge. +Susan Hunston and Geoffrey Thompson. 2000. Evaluation in text: Authorial stance and the construction of discourse: Authorial stance and the construction of discourse. Oxford University Press, UK. +Clayton Hutto and Eric Gilbert. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Proceedings of the International AAAI Conference on Web and Social Media, volume 8. +Herbert H Hyman. 1957. An exploration into opinions and personality. World Politics, 10(1):144-153. +Freda Dzifa Intiful, Emefa Gifty Oddam, Irene Kretchy, and Joana Quampah. 2019. 
Exploring the relationship between the big five personality characteristics and dietary habits among students in a Ghanaian university. BMC psychology, 7(1):1-7. +Scott F. Kiesling, Umashanthi Pavalanathan, Jim Fitzpatrick, Xiaochuang Han, and Jacob Eisenstein. 2018. Interactional stancetaking in online forums. Computational Linguistics, 44(4):683-718. +Vivek Kulkarni, Margaret L. Kern, David Stillwell, Michal Kosinski, Sandra Matz, Lyle Ungar, Steven Skiena, and H. Andrew Schwartz. 2018. Latent human traits in the language of social media: An open-vocabulary approach. PLOS ONE, 13(11):1-18. +Lisa M. Larson, Patrick J. Rottinghaus, and Fred H. Borgen. 2002. Meta-analyses of big six interests and big five personality factors. Journal of Vocational Behavior, 61(2):217-239. +Chei Sian Lee and Long Ma. 2012. News sharing in social media: The effect of gratifications and prior experience. Computers in human behavior, 28(2):331-339. +Ling Luo, Xiang Ao, Yan Song, Jinyao Li, Xiaopeng Yang, Qing He, and Dong Yu. 2019. Unsupervised neural aspect extraction with sememes. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 5123-5129. International Joint Conferences on Artificial Intelligence Organization. +Jean Pierre Malrieu. 1999. *Evaluative semantics: Language, cognition, and ideology*. Psychology Press. +Tara C Marshall, Katharina Lefringhausen, and Nelli Ferenczi. 2015. The big five, self-esteem, and narcissism as predictors of the topics people write about in facebook status updates. *Personality and Individual Differences*, 85:35-40. +Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. + +Daniel J Ozer and Veronica Benet-Martinez. 2006. Personality and the prediction of consequential outcomes. Annual review of psychology, 57:401. +Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Found. 
Trends Inf. Retr., 2(1-2):1-135. +Gregory Park, H. Schwartz, Johannes Eichstaedt, Margaret Kern, Michal Kosinski, David Stillwell, Lyle Ungar, and Martin Seligman. 2014. Automatic personality assessment through social media language. Journal of personality and social psychology, 108. +Umashanthi Pavalanathan, Jim Fitzpatrick, Scott Kiesling, and Jacob Eisenstein. 2017. A multidimensional lexicon for interpersonal stancetaking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 884-895, Vancouver, Canada. Association for Computational Linguistics. +James Pennebaker and Laura King. 2000. Linguistic styles: Language use as an individual difference. Journal of personality and social psychology, 77:1296-312. +Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27-35, Dublin, Ireland. Association for Computational Linguistics. +Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics. +Nils Reimers, Benjamin Schiller, Tilman Beck, Johannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of arguments with contextualized word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 567-578, Florence, Italy. Association for Computational Linguistics. +Margaret E Roberts, Brandon M Stewart, Edoardo M Airoldi, K Benoit, D Blei, P Brandt, and A Spirling. 2014. Structural topic models. Retrieved May, 30:2014. 
+Irene Russo, Tommaso Caselli, and Carlo Strapparava. 2015. SemEval-2015 task 9: CLIPEval implicit polarity of events. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 443-450, Denver, Colorado. Association for Computational Linguistics. + +Ted Schwaba, Mijke Rhemtulla, Christopher J Hopwood, and Wiebke Bleidorn. 2020. A facet atlas: Visualizing networks that describe the blends, cores, and peripheries of personality structure. *PloS one*, 15(7):e0236893. +H. Andrew Schwartz, Johannes C. Eichstaedt, Margaret L. Kern, Lukasz Dziurzynski, Stephanie M. Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, Martin E. P. Seligman, and Lyle H. Ungar. 2013. Personality, gender, and age in the language of social media: The open-vocabulary approach. PLOS ONE, 8(9):1-16. +Ewa Skimina, Jan Cieciuch, and Włodzimierz Strus. 2021. Traits and values as predictors of the frequency of everyday behavior: Comparison between models and levels. Current Psychology, 40(1):133-153. +Christopher J Soto. 2019. How replicable are links between personality traits and consequential life outcomes? the life outcomes of personality replication project. *Psychological Science*, 30(5):711-727. +Ross David Stewart, René Möttus, Anne Seeboth, Christopher John Soto, and Wendy Johnson. 2022. The finer details? the predictability of life outcomes from big five domains, facets, and nuances. Journal of personality, 90(2):167-182. +Cajo ter Braak. 1990. Interpreting canonical correlation analysis through biplots of stucture correlations and weights. Psychometrika, 55:519-531. +Silvia Terragni, Elisabetta Fersini, and Enza Messina. 2021. Word embedding-based topic similarity measures. In Natural Language Processing and Information Systems: 26th International Conference on Applications of Natural Language to Information Systems, NLDB 2021, Saarbrücken, Germany, June 23-25, 2021, Proceedings, page 33-45, Berlin, Heidelberg. Springer-Verlag. 
+Geoffrey Thompson and Susan Hunston. 2000. Evaluation: an introduction. In Evaluation in text: Authorial stance and the construction of discourse. Oxford University Press. +Zhiqiang Toh and Wenting Wang. 2014. DLIREC: Aspect term extraction and term polarity classification system. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 235-240, Dublin, Ireland. Association for Computational Linguistics. +Stéphan Tulkens and Andreas van Cranenburg. 2020. Embarrassingly simple unsupervised aspect extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3182-3187, Online. Association for Computational Linguistics. +Janyce Wiebe, Theresa Wilson, Rebecca Bruce, Matthew Bell, and Melanie Martin. 2004. Learning subjective language. Computational Linguistics, 30(3):277-308. + +Werner W Wittmann. 2012. Principles of symmetry in evaluation research with implications for offender treatment. Antisocial behavior and crime. Contributions of developmental and evaluation research to prevention and intervention, (2011):357-368. +Yuan Zuo, Junjie Wu, Hui Zhang, Hao Lin, Fei Wang, Ke Xu, and Hui Xiong. 2016. Topic modeling of short texts: A pseudo-document view. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, page 2105-2114, New York, NY, USA. Association for Computing Machinery. + +# A Technical details + +# A.1 Dataset + +We selected the comments from users with Big Five self-reports on PANDORA. We kept only the comments in English and removed unlikely candidates for evaluative expressions, including comments that were too short – those with fewer than five words – and noisy comments – those consisting of $50\%$ or more non-alphanumeric characters. We segmented the comments into sentences using spaCy's English pipeline en_core_web-lg12 to proceed at a finer level. 
Once again, we discarded texts that were too short, this time sentences with fewer than three words.

# A.2 Evaluative filtering

When we employed QSB to mine paraphrases of the initial set of evaluative expressions, we aimed for a tenfold increase in the size of the seed set (29k sentences), so we tuned the values of $t_{sim}$ and $\gamma$. We used grid search with the parameter ranges $t_{sim} \in [0.50, 0.90]$ and $\gamma \in [0.90, 1.10]$, with a step of 0.05 in both cases, which led us to $t_{sim} = 0.7$ and $\gamma = 1.05$. QSB yielded 310k sentences with evaluative markers.

# A.3 Topic modeling

We used the scikit-learn implementation of LDA. We adapted the code from the original papers for BTM (Cheng et al., 2014), ABAE (He et al., 2017), and CTM (Bianchi et al., 2021). The neural architectures in ABAE and CTM have 100,020 and 19,794,280 parameters, respectively. We trained ABAE for 5 epochs and CTM for 20 epochs, with average running times of 71.4 and 590.8 minutes over 10 different runs.

We used the same preprocessing for all topic models that we experimented with. Specifically, we removed punctuation and lowercased the text. We also eliminated stop words, URLs, emails, digits, and currency symbols, after which we lemmatized the tokens. We repeated each experiment 10 times with different seeds. We conducted a grid search with the number of topics as a hyperparameter in the [5, 100] range with a step of 5. For the rest of the hyperparameters, we used default values.

# A.4 Computing infrastructure

We conducted our experiments on $2\times$ AMD Ryzen Threadripper 3970X 32-core processors and $2\times$ NVIDIA GeForce RTX 3090 GPUs with 24 GB of RAM. We used PyTorch version 1.9.0 and CUDA 11.4.

# B Additional analysis

In Figure 2, there are cases where certain evaluative topics show significant correlations with several facets from the same trait.
For instance, the topic social issues correlates positively and significantly with anger, immoderation, and anxiety facets from the neuroticism trait. On the other hand, there are a number of cases where the correlation between facets from the same trait and a topic is of a different sign. This corroborates our hypothesis that topics align better with facets than with traits.

![](images/48c7686873efa887e0cb8c4514e528cf32b882440521ab56cdfaf3fed3259143.jpg)
Figure 5: Facet projection onto the first canonical variate. The values on the y-axis represent the canonical loadings of facets in the first canonical dimension.

![](images/72c0b1ec75e91d6bf5677800347e559422462d8bd21487c2c570404c092d696f.jpg)
Figure 6: Facet projection onto the second canonical variate. The second dimension provides a perfect separation of conscientiousness and neuroticism, with a mixture of facets from the remaining traits.

# You can't pick your neighbors, or can you? When and how to rely on retrieval in the kNN-LM

Andrew Drozdov*, Shufan Wang, Razieh Rahimi, Andrew McCallum, Hamed Zamani, and Mohit Iyyer

Manning College of Information and Computer Sciences

University of Massachusetts Amherst

# Abstract

Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs. One such approach, the kNN-LM, interpolates any existing LM's predictions with the output of a $k$-nearest neighbors model and requires no additional training. In this paper, we explore the importance of lexical and semantic matching in the context of items retrieved by the kNN-LM. We find two trends: (1) the presence of large overlapping $n$-grams between the datastore and evaluation set is an important factor in strong performance, even when the datastore is derived from the training data; and (2) the kNN-LM is most beneficial when retrieved items have high semantic similarity with the query. Based on our analysis, we define a new formulation of the kNN-LM that uses retrieval quality to assign the interpolation coefficient. We empirically measure the effectiveness of our approach on two English language modeling datasets, Wikitext-103 and PG-19.
Our re-formulation of the kNN-LM is beneficial in both cases, and leads to nearly $4\%$ improvement in perplexity on the Wikitext-103 test set.

# 1 Introduction

Recently, a new class of language models (LMs) that are augmented with retrieval capabilities have led to substantial improvements over standard neural LMs (Lewis et al., 2020; He et al., 2020; Yogatama et al., 2021; Borgeaud et al., 2021; Wu et al., 2022; Thoppilan et al., 2022, inter alia). Furthermore, LMs with retrieval warrant investigation as they provide benefits for many tasks (Zamani et al., 2022). These approaches generally involve a backbone neural LM that interacts with a retrieval component of varying complexity to find relevant documents. In this work, we analyze and improve

![](images/cdc91e0febf1c5941f223b952b3ab842036355f70210b1d751cf5aef598e38e8.jpg)
Figure 1: We present an extension to the kNN-LM that conditions the interpolation coefficient $(\lambda)$ on the semantic similarity of retrieved contexts.

a specific and simple type of retrieval-enhanced language model: the kNN-LM, originally proposed by Khandelwal et al. (2020).

The kNN-LM is non-parametric — it works by retrieving instances from an external datastore at each decoding timestep, and it improves language model performance without requiring additional training. In essence, the kNN-LM interpolates a base LM's predicted probability distribution of the next word with a distribution formed by retrieving vectors similar to the current hidden state. The kNN-LM includes two tunable hyperparameters: the number of items to retrieve $(k)$ and an interpolation coefficient $(\lambda)$. The method's effectiveness depends crucially on the source and size of the retrieval datastore: it is most effective when using a very large datastore with orders of magnitude more tokens than seen in the training corpus, but Khandelwal et al. (2020) also observe improvements with smaller datastores.
Modern neural models have massive capacity to memorize their training data (Zhang et al., 2017). Nonetheless, simply using an LM's training corpus as the source for the datastore works well for the kNN-LM, as test perplexity on the Wikitext-103 dataset decreases substantially from 18.65 to 16.12. However, it remains unclear how and why the kNN-LM achieves these improvements. Which types of tokens and contexts does it improve most on? In an effort to answer this question, and to motivate new, more effective methods to enhance LMs with retrieval, we analyze the kNN-LM's behavior with respect to parts of speech, semantic similarity between context and retrievals, and lexical overlap.

Among other findings, our analysis reveals that the kNN-LM is helpful beyond factual knowledge (i.e., proper nouns), and improves perplexity across many word types, so it would be difficult to extend the kNN-LM using syntactic information alone. On the other hand, we find that the performance of the kNN-LM highly correlates with the lexical similarity between the context and retrieved items, although this is somewhat domain specific and does not fully explain its strong performance. Semantic similarity is nearly as accurate a predictor of kNN-LM performance as lexical similarity, making it a strong candidate for extending the kNN-LM.
Importantly, our empirical results demonstrate that our newly introduced re-formulation of the kNN-LM is beneficial for both encyclopedic text and book data, and leads to an improvement of nearly $4\%$ perplexity over the vanilla kNN-LM, measured on the English language modeling Wikitext-103 test set. Broadly, we hope our insights and methods help facilitate future development of retrieval-augmented LMs.

# 2 Language Modeling with kNN-LM

The kNN-LM improves over a base language model by explicitly memorizing the LM's training data. It stores exact sentences from the training data in its datastore, which can be accessed during language model inference to produce a $k$-nearest neighbor next-word distribution that is interpolated with the base model's prediction. Interpolation is preferred for similar reasons as approximate matrix factorization in collaborative filtering — the universe of text patterns is sparse, and lossless compression of the training data alone is not sufficient to model new patterns. In this section, we explain the specifics of the kNN-LM's inner workings in order to guide our analysis.

# 2.1 General Approach

The kNN-LM (Khandelwal et al., 2020) is a language model with a retrieval component. Like all language models, it predicts the word at time step $t$ conditioned on the history of words: $P(w_{t}|w_{0},w_{1},\ldots ,w_{t - 1})$. Neural language models encode the history of words using a vector $h$: $P(w_{t}|h_{t - 1})$. What makes the kNN-LM novel is that it uses a pretrained language model to encode a collection of documents, and then retrieves documents from this collection based on vector similarity in order to improve its next-word prediction. Notably, the retrieval is completely latent — no supervised ranking information is used, and documents are retrieved using semantic similarity.

The kNN-LM follows a particular way of encoding the collection of documents into a datastore.
Consider document $x_{i}$ consisting of $n$ words. The kNN-LM encodes the first $n - 1$ words as a vector, and this becomes the key of document $x_{i}$, referred to as $k_{i}$. The $n$-th word is saved as the value $v_{i}$. In practice, and since the kNN-LM is used for language modeling, a sequence with $n$ words is recorded as $n - 1$ documents: for any $t \leq n$, a document is built whose key is words $w_{1}$ to $w_{t - 1}$ and whose value is $w_{t}$.

After the datastore is built, the kNN-LM is evaluated on a dataset with $m$ words, predicting words from left to right. Retrieval in the kNN-LM is done by measuring the Euclidean distance $d(\cdot,\cdot)$ between vector encodings of the query $q_{j}$ (corresponding to the context of the $j$-th word in the evaluation data) and the keys in the datastore. The values from retrieved documents define a new distribution over the next word:

$$
P_{kNN}(w_t \mid q_t) \propto \sum_{(k_i, v_i)} \mathbb{1}_{w_t = v_i} \exp(-d(k_i, q_t)) \tag{1}
$$

The best performance typically involves mixing the original and kNN-based word distributions using a tunable hyperparameter $\lambda$:

$$
P'(w_t \mid q_t) = \lambda P_{kNN}(w_t \mid q_t) + (1 - \lambda) P(w_t \mid q_t)
$$

The $\lambda$ is fixed, yet it would be beneficial if $\lambda$ were conditioned on a per-token basis. We present an approach along these lines in the next section.

![](images/3ffcc95e766a5b19b72401bdeea4597ec4bb598a6464331f37b07b23cbe8b7c3.jpg)
Figure 2: Relative perplexity improvement of the kNN-LM compared to the base language model measured on the Wikitext-103 validation set. Queries are bucketed by semantic similarity of the top retrieved item, which operates as a proxy for retrieval quality.

# 3 Analysis: When is kNN-LM effective?
In the original kNN-LM work, the authors made qualitative observations that the model generally helps for rare patterns, factual knowledge, and names (Khandelwal et al., 2020). In this section we perform automated analysis to more specifically understand when the kNN-LM is beneficial, with the aim of uncovering systematic behavior that can be leveraged to extend the kNN-LM and improve its effectiveness at next-word prediction.

# 3.1 Semantic Similarity of Retrieved Items

The kNN-LM encodes the context into a fixed-length query vector and uses this to retrieve semantically similar contexts from the datastore. A priori, it is difficult to know when retrieval will be helpful, but perhaps there is a higher chance of usefulness if the result closely matches the query.

Figure 2 examines this intuition a posteriori on the Wikitext-103 validation set. We bucket queries according to their semantic similarity with their top retrieved item, then report the relative perplexity improvement of the kNN-LM over the base model separately for each bucket.1 The queries are sorted by the associated semantic similarity, then divided into 20 equally sized buckets. The first contains the $5\%$ of queries that have the highest semantic similarity with their top retrieved item. The plot in Figure 2 clearly indicates that the kNN-LM is most beneficial in the buckets with high semantic similarity, supporting the hypothesis that semantic similarity is a proxy for retrieval quality.
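The bucketing analysis above can be sketched in a few lines. This is our own illustration, not the authors' released code; the array names (`sims`, `base_nll`, `knn_nll`) are assumptions, standing in for per-query similarities and per-token negative log-likelihoods under the two models.

```python
import numpy as np

def bucket_improvement(sims, base_nll, knn_nll, n_buckets=20):
    """Bucket queries by the semantic similarity of their top retrieved
    item, then report the kNN-LM's relative perplexity improvement per
    bucket (the quantity plotted in Figure 2)."""
    order = np.argsort(-sims)                  # most similar queries first
    buckets = np.array_split(order, n_buckets)
    improvements = []
    for idx in buckets:
        ppl_base = np.exp(base_nll[idx].mean())  # perplexity = exp(mean NLL)
        ppl_knn = np.exp(knn_nll[idx].mean())
        improvements.append((ppl_base - ppl_knn) / ppl_base)
    return improvements
```

Sorting before `array_split` makes the first bucket the highest-similarity 5% of queries, matching the description above.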
| | Dev | Dev-8 | Test | Test-8 |
| --- | --- | --- | --- | --- |
| *Wikitext* | | | | |
| Base LM | 17.96 | 17.96 | 18.65 | 18.65 |
| kNN-LM | 16.06 | 17.28 | 16.12 | 18.05 |
| Ours | 15.72 | 17.26 | 15.50 | 18.03 |
| *PG-19* | | | | |
| Base LM | 60.83 | 60.83 | 50.95 | 50.95 |
| kNN-LM | 52.49 | 53.34 | 43.93 | 44.97 |
| Ours | 52.08 | 53.06 | 43.58 | 44.78 |

Table 1: Perplexity on the Wikitext-103 and PG-19 datasets. Dev-8 and Test-8 contain the same data as Dev and Test, but contexts with overlapping $n$-grams ($n \geq 8$) with the evaluation data have been removed from the kNN-LM datastore. Our method (§4) uses retrieval quality to interpolate between the kNN and base LMs.

# 3.2 Lexical Overlap

Another possible proxy for relevance is lexical overlap. Rather than assign queries to buckets using semantic similarity derived from neural network hidden states, we first convert contexts into TFIDF vectors (using a 32-token trailing window), which are a popular and effective bag-of-words representation (Chen et al., 2017). We use the same neighbors as before, but now assign buckets using the distance between TFIDF vectors. The relative perplexity for this setting is reported in Figure 2, and aligns well with what we saw using semantic similarity in the previous subsection. This suggests that the kNN-LM is also beneficial when query contexts have high lexical overlap with the datastore contexts.

To further examine the role of lexical matching in the performance of the kNN-LM, we rebuild the index used for retrieval in a way that minimizes lexical overlap. The keys are identical to before, but we ignore contexts that include large overlapping $n$-grams ($n \geq 8$) with the evaluation data. In Table 1, we compare the original with this new restricted datastore on Wikitext-103. Even with these lexically similar contexts removed, the kNN-LM still provides some benefit (although severely diminished), so lexical similarity alone does not fully explain performance.

![](images/fc32d3e5aa9bc3d96df65c07a5ed02db12c8a76950d278226b682d65699941e1.jpg)

![](images/7fb605a23331596cde6fab84ad3efec2b587a7d27e55358701737418f5f54b73.jpg)
Figure 3: Perplexity of the base language model grouped by part-of-speech (top), and relative improvement of the kNN-LM (bottom).
# 3.3 Part-of-Speech Tags

Another lens, syntax, sheds light on kNN-LM performance outside of document relevance. To further understand which types of words benefit most from the kNN-LM, we group tokens by their part of speech. Then we compute validation perplexity separately for each group using both the base language model and the kNN-LM. To get part-of-speech tags, we segment the data into sentences and label words using the tagger from Stanza3 with the universal dependencies output space. We include categories with frequency greater than 1K in the Wikitext-103 validation data.

The results are included in Figure 3. We find that the kNN-LM is most helpful for syntactic categories where the base language model struggles most, e.g. the original perplexity for adjectives (ADJ) is 105.37 and the kNN-LM improves perplexity by $16.3\%$ for this category. The five other categories with the worst perplexity (ADV, NOUN, NUM, PROPN, VERB) are also where the kNN-LM works best.

This analysis serves as a useful sanity check. The syntactic categories are often associated with factual knowledge tied to entity relations, but no single category dominates performance. Also, there is some benefit for every category, so it is not clear that any should be avoided.

![](images/1ea5fece25589075e421df27ca7de57ffbb651dd4356f8d828ca835892cdc8a9.jpg)
Figure 4: Coefficient assignments $(\lambda_q)$ after tuning on the Wikitext-103 validation set for different numbers of buckets, $b \in \{1,2,8,32\}$.

# 4 A New Formulation for kNN-LM

In the previous section, we analysed when the kNN-LM is most helpful. We use this information to design a new formulation of the kNN-LM that can exploit this behavior. The original kNN-LM uses the same interpolation coefficient $(\lambda)$ for every example, which may not be desirable.
As our analysis reveals, we can predict when the kNN-LM is most beneficial, which naturally leads us to a new formulation with an adaptive $\lambda$:

$$
P'(w_t \mid q_t) = \lambda_q P_{kNN}(w_t \mid q_t) + (1 - \lambda_q) P(w_t \mid q_t)
$$

where $\lambda_{q}$ is a function of both the query and its retrieved documents rather than a constant for all queries. This is highly similar to the formulation in He et al. (2021), except that theirs ignores retrieved items when deciding the coefficient.

Using the same $\lambda$ for all examples is limiting and does not leverage retrieval well when neighboring keys are clearly relevant (as shown in Figure 1). Of course, the critical decision here is how to map semantic similarity to an appropriate value for the coefficient. We find it convenient and effective to use a piecewise function based on semantic similarity, following the bucketing described in §3.1. We use the validation data for tuning, sorting queries by semantic similarity with the top retrieved item and then dividing them into $b$ equally sized buckets. For each bucket we perform the same hyperparameter search over coefficients as in the kNN-LM.

Example coefficient assignments for different numbers of buckets $(b)$ are shown in Figure 4.

# 5 Experiments and Results

To measure the importance of retrieval quality in the kNN-LM, we evaluate our approach (§4) on two English language modeling datasets. The first is the Wikitext-103 corpus (Merity et al., 2016) used by Khandelwal et al. (2020). The second is PG-19 (Rae et al., 2020), which we include because it consists of books and is thematically distinct from the encyclopedic documents in Wikitext-103.

# 5.1 Experimental Setup and Pre-processing

Wikitext-103 The data is split 103M/217K/245K tokens for training, validation, and test. We use the pretrained model from Khandelwal et al. (2020) and the associated 267K word-level vocabulary.
PG-19 To understand when adapting the coefficient to retrieval quality is desirable compared with a static coefficient, we include PG-19 in our experiments. PG-19 consists of books and is thematically distinct from the encyclopedic documents in the Wikitext-103 data. We sample 2,000 books from the training corpus, which gives approximately 150M tokens and is close in size to Wikitext-103. We use the standard validation split (50 books) and test split (100 books). We use word-level tokenization with a 300K vocabulary derived from our constructed training split. We train our own model using the same architecture and hyperparameters as Khandelwal et al. (2020).

Baselines We choose these baselines to isolate the effect of retrieval quality on the performance of the kNN-LM: the self-attentive adaptive input representation from Baevski and Auli (2019) as the base model, the original kNN-LM (Khandelwal et al., 2020), and the continuous cache model (Grave et al., 2017), which retrieves from both the datastore and the local context. As described in §2.1, the datastore is built by encoding a large text corpus, in this case the training set. Although we use approximate neighbors, we compute the next-word probability with exact distance, as this substantially boosts performance (Khandelwal et al., 2020).5

# 5.2 Tuning kNN-LM Hyperparameters

For the original formulation of the kNN-LM there are two hyperparameters to tune: the number of items to retrieve $(k)$ and the interpolation coefficient $(\lambda)$.
| $b$ | Dev$_0$ | Dev$_1$ | Dev |
| --- | --- | --- | --- |
| 1 | 17.091 | 14.989 | 16.091 |
| 2 | 16.909 | 14.854 | 15.933 |
| 4 | 16.763 | 14.767 | 15.815 |
| 8 | 16.665 | 14.727 | 15.743 |
| 16 | 16.637 | 14.722 | 15.727 |
| 32 | 16.629 | 14.722 | 15.721 |
| 64 | 16.622 | 14.724 | 15.719 |
| 128 | 16.619 | 14.724 | 15.715 |

Table 2: Validation perplexity on Wikitext-103, used for hyperparameter tuning.

These are tuned on the validation set. We introduce an important hyperparameter for the number of buckets to use $(b)$ and tune a new interpolation coefficient $(\lambda_q)$ separately for each bucket. Since each bucket is assigned its own coefficient, the total number of hyperparameters grows with the number of buckets. Even so, our approach has about the same speed as the original kNN-LM, both for parameter tuning and during inference. We make hyperparameter tuning efficient by caching expensive computation (see §5.2.1 for more details). At test time, selecting the coefficient is an $O(1)$ lookup based on the semantic similarity of the top neighbor.

To select the number of buckets $(b)$, we use the first half of the validation data $(\mathrm{Dev}_0)$ to define partition boundaries, and find the best performing interpolation coefficient for each partition separately. Then we measure perplexity on the second half of the validation data $(\mathrm{Dev}_1)$ using those partition boundaries and coefficients. The choice of $b$ that gives the best perplexity on $\mathrm{Dev}_1$ is the one we ultimately use. With $b$ chosen, we then re-compute the partition boundaries and corresponding coefficients using the full validation data (Dev), which is used to evaluate against the test data.

An example of tuning $b$ on Wikitext-103 is shown in Table 2. Increasing $b$ always leads to better perplexity on $\mathrm{Dev}_0$, albeit with diminishing returns. Since the partition boundaries and coefficients are chosen using $\mathrm{Dev}_0$, it is not guaranteed that increasing $b$ improves perplexity on the held-out data $(\mathrm{Dev}_1)$. Although tuning the partition boundaries and coefficients on the validation data does not guarantee improvement on the test data, in our experiments we find our adaptive coefficient is always at least as effective as the original static one.
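The piecewise coefficient of §4 and its $O(1)$-style test-time lookup can be sketched as follows. This is a sketch under our own assumptions (quantile-based boundaries fit on validation similarities; function names are illustrative, not from the authors' code):

```python
import numpy as np

def fit_boundaries(dev_sims, b):
    """Partition boundaries: similarity quantiles on the validation data,
    so that each of the b buckets holds an equal share of queries."""
    qs = np.linspace(0.0, 1.0, b + 1)[1:-1]
    return np.quantile(dev_sims, qs)

def lambda_lookup(sim, boundaries, coefs):
    """Constant-time lookup of the per-bucket coefficient lambda_q.
    `coefs` holds one tuned coefficient per bucket (len(boundaries) + 1)."""
    return coefs[np.searchsorted(boundaries, sim)]

def interpolate(p_knn, p_lm, sim, boundaries, coefs):
    """Adaptive mixture: lambda_q * P_kNN + (1 - lambda_q) * P_LM."""
    lam = lambda_lookup(sim, boundaries, coefs)
    return lam * p_knn + (1 - lam) * p_lm
```

With coefficients sorted so that higher-similarity buckets get larger values (as in Figure 4), high-quality retrievals automatically receive more weight.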
| | $\lambda$ | $b$ | $k$ | Dev | Test |
| --- | --- | --- | --- | --- | --- |
| Base LM | - | - | - | 17.96 | 18.65 |
| kNN-LM | 0.25 | 1 | 1024 | 16.06 | 16.12 |
| +CCache | 0.25 | 1 | 1024 | 15.81 | 15.79 |
| Ours (TFIDF) | $\lambda_q$ | 32 | 1024 | 15.76 | 15.54 |
| Ours | $\lambda_q$ | 32 | 1024 | 15.72 | 15.50 |

Table 3: Test and validation perplexity on Wikitext-103. This is our main result and demonstrates that our new formulation with an adaptive coefficient $(\lambda_q)$ substantially improves over the kNN-LM.

# 5.2.1 Computational Cost of Tuning

Our approach is nearly the same speed as the original kNN-LM, both at test time and for hyperparameter tuning. This is the case even though our hyperparameter count scales with $b$ and is more than an order of magnitude larger than that of the kNN-LM. We accomplish this by caching query vectors, retrieved items, and associated vector distances. The initial computation of these values takes hours and is the same as with the kNN-LM, but once cached, the hyperparameter search for the adaptive coefficient on the Wikitext-103 data takes less than 5 minutes. Our implementation with caching is available here: github.com/iesl/knnlm-retrieval-quality.

# 5.3 Perplexity on WikiText-103

Table 3 reports the perplexity of our approach and various baselines on the Wikitext-103 validation and test sets. Our approach scores 15.50 perplexity on the test set. This is a $16.9\%$ improvement over the base language model and a $3.8\%$ improvement over the original kNN-LM formulation.

For the number of buckets $(b)$ we found 32 to work best (see Table 2), and the set of coefficients is the same as shown in Figure 4. Our search space includes $b \in \{1, 2, 4, 8, 16, 32, 64, 128\}$ and $\lambda_q \in \{0.05, 0.1, 0.15, \dots, 0.9, 0.95\}$.

Khandelwal et al. (2020) find that retrieving from recent history using the continuous cache model (CCache; Grave et al. 2017) is complementary to retrieving from the datastore, improving perplexity when combined with the kNN-LM. This type of caching is outside the scope of this paper, and our approach already outperforms their combined model.
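The caching idea in §5.2.1 can be illustrated with a toy grid search over cached per-token probabilities: once $P_{kNN}$ and $P_{LM}$ are stored for every evaluation token, sweeping the coefficient never touches the model or the index again. The cache layout and names below are our own assumption, not the released implementation:

```python
import numpy as np

def cached_search(cache, lambda_grid):
    """Grid-search the interpolation coefficient per bucket using cached
    per-token probabilities of the gold next word.
    `cache` maps bucket id -> (p_knn, p_lm) arrays for the tokens in it."""
    best = {}
    for bucket, (p_knn, p_lm) in cache.items():
        # Negative log-likelihood of the mixture for each candidate lambda.
        nlls = [
            (lam, -np.mean(np.log(lam * p_knn + (1 - lam) * p_lm)))
            for lam in lambda_grid
        ]
        best[bucket] = min(nlls, key=lambda x: x[1])[0]
    return best
```

Each bucket's sweep is a few vectorized operations over arrays already in memory, which is why the full search finishes in minutes.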
# 5.4 Perplexity on PG-19

To further understand how lexical overlap influences kNN-LM performance, we evaluate using the PG-19 dataset. Compared to Wikipedia, text across books has much less repetition, so text retrieved from the datastore is less likely to overlap with $n$-grams in the evaluation data.

We train our own model using the same architecture and hyperparameters as for Wikitext-103, and report perplexity in Table 1. We found $b = 32$ works best. Despite the challenging properties of the book data, the kNN-LM is still effective. Our re-formulation is marginally beneficial here.

# 5.5 Filtering $n$-grams from the Datastore

Thus far, our analysis indicates that lexical overlap is important for strong kNN-LM performance. To test this directly for our adaptive coefficient, we follow the procedure described in §3.2 to rebuild the datastore, removing from the index large $n$-grams ($n \geq 8$) and their surrounding tokens that also appear in the evaluation data.

The results for this experiment on both Wikitext-103 and PG-19 are shown in Table 1. Most of the kNN-LM's improvements on Wikitext-103 come from retrieving contexts with overlapping $n$-grams, which could motivate simpler and faster retrieval functions. On the other hand, the cases in which $n$-gram overlap does not play a major role require further investigation.

# 6 Discussion

In previous sections we used observations of the kNN-LM to motivate our new approach that adapts the interpolation coefficient to retrieval quality. Here we analyze results with our new method to see how they compare with the baselines and deepen our understanding of retrieval-enhanced language modeling.

# 6.1 Can we adapt to lexical similarity?

The original kNN-LM has similar performance when its results are stratified by either semantic or lexical similarity (§3.1), but in our new formulation we adapt the coefficient only according to semantic similarity. What if we use lexical simi
| | $\lambda$ | $b$ | $k$ | Dev |
| --- | --- | --- | --- | --- |
| Dense | 0.25 | 1 | 1024 | 16.06 |
| Dense | $\lambda_q$ | 32 | 1024 | 15.72 |
| TFIDF | $\lambda_q$ | 32 | 1024 | 15.76 |
| Dense | 0.05 | 1 | 1 | 17.10 |
| Dense | 0.15 | 1 | 8 | 16.66 |
| Dense | 0.25 | 1 | 64 | 16.31 |
| Dense | $\lambda_q$ | 16 | 1 | 16.63 |
| Dense | $\lambda_q$ | 128 | 8 | 16.19 |
| Dense | $\lambda_q$ | 16 | 64 | 15.90 |
| TFIDF | $\lambda_q$ | 32 | 1 | 16.38 |
| TFIDF | $\lambda_q$ | 64 | 8 | 16.06 |
| TFIDF | $\lambda_q$ | 16 | 64 | 15.87 |

Table 4: Validation perplexity on Wikitext-103, used for ablation analysis. The kNN-LM uses a single static value for the interpolation coefficient $(\lambda)$; our method uses an adaptive coefficient $(\lambda_q)$. This table includes our approach when using the semantic similarity (Dense) or bag-of-words (TFIDF) representation. Depending on how many items are retrieved $(k)$, our approach works best with a different number of buckets $(b)$.

larity instead? We explore this possible alternative and report the results for Wikitext-103 in Table 4.

In general, we find that both semantic and lexical similarity $^{8}$ yield similar results when used to bucket queries. For the best setting, when $k = 1024$, the learned vectors work better, reflecting recent findings that dense vectors outperform sparse representations for various retrieval-related tasks (Lee et al., 2019; Gao et al., 2021). Hence, throughout this paper we adapt the coefficient using semantic similarity and $k = 1024$ unless otherwise specified. Interestingly, for lower values of $k$ the bag-of-words representation has an edge over semantic similarity. Perhaps this suggests lexical similarity is more precise; if retrieving many items is costly, then adapting the coefficient according to lexical similarity might be particularly helpful.

# 6.2 Do syntactic trends hold across domains?

We repeat the syntactic analysis from §3.3 using our adaptive coefficient and include PG-19 as an additional dataset.$^{9}$ The corresponding plots are shown in Figure 5.

![](images/96a35b129eb7a97331bd59bbd7079b6513d50f193c093cade405ab36274e943c.jpg)

![](images/f0bea8a5463b40783eee90804044b8e22300a7cc93437330129c2a3cd902ed2b.jpg)

![](images/620f5763425ccd548fd94b2f15c7b970f07db7188c90b33b047d56f80108c8c7.jpg)
Figure 5: Perplexity of the base language model (top), grouped by part-of-speech.
Relative perplexity improvement by kNN-LM approaches on Wikitext-103 (center) and PG-19 (bottom). The lines corresponding to the kNN-LM match Figure 3 — they are included here to emphasize the difference from our new formulation.

In both domains, the base model has a similar pattern of perplexity across part-of-speech tags, but there are some differences when comparing the kNN-LM across domains. For instance, the kNN-LM is especially helpful for adjectives in Wikipedia text, but much less so for the book data. It is satisfying to see that our new formulation of the kNN-LM has a similar impact in many cases for both domains, e.g. improving performance on adjectives by nearly $5\%$ despite the aforementioned differences. Also, our formulation and the kNN-LM provide consistent benefits even in the relatively more challenging book domain. Besides being potentially stylistically and syntactically distinct, we imagine encyclopedic text has more repetition than book data, which would likely influence the amount of lexical overlap between the train and evaluation data. We explore the effect of deliberately limiting lexical overlap in the next subsection, providing insights into the different cases when retrieval is helpful.
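The datastore restriction used in §3.2 and §5.5 can be sketched as follows. Note this is a simplification: the paper also drops tokens surrounding an overlapping $n$-gram, whereas this sketch drops only the overlapping span itself, and the function name is our own.

```python
def ngram_filter(datastore_tokens, eval_tokens, n=8):
    """Drop datastore positions that fall inside an n-gram (n >= 8) also
    present in the evaluation data. Returns indices of tokens to keep."""
    eval_ngrams = {
        tuple(eval_tokens[i:i + n]) for i in range(len(eval_tokens) - n + 1)
    }
    keep = set(range(len(datastore_tokens)))
    for i in range(len(datastore_tokens) - n + 1):
        if tuple(datastore_tokens[i:i + n]) in eval_ngrams:
            keep -= set(range(i, i + n))  # drop the overlapping span
    return sorted(keep)
```

Only the surviving positions would be re-encoded into the restricted datastore (the Dev-8/Test-8 columns of Table 1).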
| Book | Context | $r$ |
| --- | --- | --- |
| The Unbearable Bassington, Saki (1912) | My dear Francesca, he said soothingly, laying his hand affectionately | q |
| FLORA, A.L.O.E. (1860) | My dear madam, said Mr. Ward earnestly, laying his hand on | 1 |
| Peter, Smith (1908) | this young man's uncle, said Peter, laying his hand affectionately | 11 |
| Life of Napoleon Bonaparte, Sloane (1896) | during the worst periods of terror, were thronged from pit to gallery | q |
| Sketches of Reforms—, Stanton (1849) | For weeks, that theater was crowded from pit to dome | 1 |
| Farquharson of Glune, Bateman (1908) | The storm of feeling swept alike from stall to gallery | 6 |
| Walking, Thoreau (1851) | like a dream of the Middle Ages. I floated down its historic stream | q |
| The Automobilist Abroad, Mansfield (1907) | France is a pleasure, a voyage up a picturesque and historic French | 1 |
| Canadian Notabilities, Dent (1880) | two small sailing craft slowly making their way up the majestic stream | 42 |

Table 5: Examples from PG-19 where relevant contexts are found even with large $n$-grams removed from the datastore. There can be overlap in small $n$-grams (top), local structure (center), or semantics (bottom). The contexts are shown with their corresponding book. Rank $(r)$ is shown, except for queries $(q)$. Values are bolded or italicized.

# 6.3 What use is the restricted datastore?

As we established in §3.2, the lexical overlap between a query and a retrieved context is a reasonable proxy for relevance. In Table 1, we report the perplexity of our adaptive coefficient when ignoring, while building the index, large $n$-grams that overlap with the evaluation data, yielding a restricted, less effective datastore. With these highly relevant contexts removed, we observe that the kNN-LM shows substantially worse test perplexity on Wikitext-103: 18.05 instead of 16.12. PG-19 exhibits different behavior, and the change in perplexity is minimal. This suggests that the kNN-LM can be helpful even when there are no large overlapping $n$-grams between the datastore and evaluation corpus — such cases occur frequently in PG-19, and we visualize examples in Table 5.

With the restricted datastore, the benefit from adapting the coefficient is substantially diminished for Wikitext-103, but less so for PG-19. This suggests the partitions capture qualities besides lexical similarity. Alternatively, it could be that short $n$-grams are helpful in Wikitext-103, despite Khandelwal et al. (2020) reporting that interpolating the base language model with an $n$-gram model was not very effective.

It is worth noting that even when contexts with high lexical overlap are removed from the datastore, adapting the coefficient is robust and provides performance at least on par with the kNN-LM in the same setting. While the kNN-LM is weakened here, it does still improve over the base language model.
In future work, it could prove fruitful to explore alternate strategies besides semantic or lexical similarity. + +# 7 Related Work + +We extend the $k\mathrm{NN}$ -LM by adapting the interpolation coefficient to retrieval quality (measured by semantic similarity). AdaptRet (He et al., 2021) models the interpolation coefficient as a function of the query. This is convenient, since one can skip retrieval if the coefficient is below a threshold, although requires training a separate adaptor network. Crucially, their coefficient predictions are based solely on query features, and does not take into account whether retrieval is successful. Our approach incorporates the quality of retrieval, and improves language modeling results. It is simple and effective, and only needs lightweight hyperparameter tuning without any additional training. + +RetoMaton (Alon et al., 2022) provides an alternative means to bypass retrieval. They build a graph over the datastore, and at each time step they either retrieve like the original kNN-LM or re-use the previously retrieved neighbors to traverse the graph. This is more efficient than AdaptRet, providing better results at lower cost. Both AdaptRet and RetoMaton are designed with efficiency in mind. They rely on approximate distance using product quantization and perform about as well as the exact distance version of the kNN-LM. We improve upon kNN-LM by about $4\%$ perplexity. + +There are many recent works that use retrieval components for language tasks besides language modeling, such as question answering (Godbole et al., 2019; Guu et al., 2020; Kassner and Schütze, 2020), dialogue generation (Fan et al., 2021), conversational search (Hashemi et al., 2020), semantic parsing (Gupta et al., 2021), data augmentation (Du et al., 2021), and machine translation (Khan- + +delwal et al., 2021; Zheng et al., 2021; Martins et al., 2022). 
+ +There are alternatives to $k\mathrm{NN}$ -LM that incorporate document structure (Xu et al., 2022), but their experimental setup is not comparable with ours. In our baselines we only consider models matching the original $k\mathrm{NN}$ -LM backbone, although alternative architectures show promise for retrieval-enhanced language modeling (Yogatama et al., 2021; Meng et al., 2022; Zhong et al., 2022). Scaling the datastore (Borgeaud et al., 2021) or the model size (Shoeybi et al., 2019) has been shown to improve language modeling effectively. Alternatively, text generation may be improved through more advanced ranking (Min et al., 2021) or decoding (Krishna et al., 2022) algorithms. + +Researchers have explored fundamental extensions to $k$ NN that are agnostic to language data. Wettschereck and Dietterich (1993) spatially partition the datastore, adapting the value of $k$ for each region. Keeping $k$ fixed, Hastie and Tibshirani (1995) instead adapt the shape of the neighborhood based on local information. + +# 8 Conclusion + +In this paper, we have proposed a novel and effective re-formulation of the $k\mathrm{NN}$ -LM. Our approach adapts the interpolation coefficient to the quality of the retrieved documents, measured by semantic similarity. We motivate our approach through extensive analysis, which also provides insights on the types of tokens and contexts $k\mathrm{NN}$ -LM is most helpful for. Importantly, we empirically demonstrate the effectiveness of our approach through experiments on two domains, Wikitext-103 (encyclopedic text) and PG-19 (book data), and outperform the original $k\mathrm{NN}$ -LM by $4 \%$ test perplexity on the Wikitext-103 language modeling corpus. + +# Limitations + +The kNN-LM leverages a datastore and, when it is populated with text relevant to the task domain, can be used to improve language modeling performance.
The benefits of this procedure are data-dependent and domain-specific, and the same applies to the adaptive coefficient technique that we introduce. + +The adaptive coefficient requires many more tunable hyperparameters. To address this, we release an optimized codebase that performs this hyperparameter search in negligible time compared with the original kNN-LM. + +# Ethical Concerns and Impact + +Even when used with the best intentions, language models can produce malicious or harmful text, and guards are typically used to account for inherent bias or undesirable output. In our case, we do not generate text and simply use the model to evaluate perplexity on existing data, so the effectiveness of safety guards and their limitations are not a relevant concern in this work. + +# Acknowledgements + +We are grateful to Fernando Diaz, Urvashi Khandelwal, Kalpesh Krishna, Simeng Sun, the UMass NLP group and IESL for several useful discussions during the course of the project. This work was supported in part by the Center for Intelligent Information Retrieval and the Center for Data Science; in part by the IBM Research AI through the AI Horizons Network; in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction; in part by the National Science Foundation (NSF) grant numbers IIS-1922090, IIS-1955567, IIS-1763618, and IIS-2106391; in part by the Defense Advanced Research Projects Agency (DARPA) via Contract No. FA8750-17-C-0106 under Subaward No. 89341790 from the University of Southern California; and in part by the Office of Naval Research (ONR) via Contract No. N660011924032 under Subaward No. 123875727 from the University of Southern California. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor. + +# References + +Uri Alon, Frank Xu, Junxian He, Sudipta Sengupta, Dan Roth, and Graham Neubig. 2022.
Neuro-symbolic language modeling with automaton-augmented retrieval. In International Conference on Machine Learning, pages 468-485. PMLR. +Alexei Baevski and Michael Auli. 2019. Adaptive input representations for neural language modeling. In International Conference on Learning Representations. +Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, T. W. Hennigan, Saffron Huang, Lorenzo Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen + +Simonyan, Jack W. Rae, Erich Elsen, and L. Sifre. 2021. Improving language models by retrieving from trillions of tokens. In ICML. +Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc. +Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Association for Computational Linguistics (ACL). +Jingfei Du, Edouard Grave, Belize Gunel, Vishrav Chaudhary, Onur Celebi, Michael Auli, Ves Stoyanov, and Alexis Conneau. 2021. Self-training improves pre-training for natural language understanding. In NAACL. +Angela Fan, Claire Gardent, Chloe Braud, and Antoine Bordes. 2021. Augmenting transformers with knn-based composite memory for dialog. 
Transactions of the Association for Computational Linguistics, 9:82-99. +Luyu Gao, Zhuyun Dai, and Jamie Callan. 2021. COIL: Revisit exact lexical match in information retrieval with contextualized inverted list. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3030-3042, Online. Association for Computational Linguistics. +Ameya Godbole, Dilip Chakravarthy Kavarthapu, Rajarshi Das, Zhiyu Gong, Abhishek Singhal, Hamed Zamani, Mo Yu, Tian Gao, Xiaoxiao Guo, Manzil Zaheer, and Andrew McCallum. 2019. Multi-step entity-centric information retrieval for multi-hop question answering. ArXiv, abs/1909.07598. +Edouard Grave, Armand Joulin, and Nicolas Usunier. 2017. Improving neural language models with a continuous cache. In International Conference on Learning Representations. +Vivek Gupta, Akshit Shrivastava, Adithya Sagar, Armen Aghajanyan, and Denis Savenkov. 2021. Retronlu: Retrieval augmented task-oriented semantic parsing. ArXiv, abs/2109.10410. +Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrievalaugmented language model pre-training. ArXiv, abs/2002.08909. + +Helia Hashemi, Hamed Zamani, and W. Bruce Croft. 2020. Guided transformer: Leveraging multiple external sources for representation learning in conversational search. Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. +Trevor Hastie and Robert Tibshirani. 1995. Discriminant adaptive nearest neighbor classification and regression. In Advances in Neural Information Processing Systems, volume 8. MIT Press. +Junxian He, Taylor Berg-Kirkpatrick, and Graham Neubig. 2020. Learning sparse prototypes for text generation. In NeurIPS. +Junxian He, Graham Neubig, and Taylor Berg-Kirkpatrick. 2021. Efficient nearest neighbor language models. In EMNLP. +Nikhil Kandpal, Eric Wallace, and Colin Raffel. 2022. 
Deduplicating training data mitigates privacy risks in language models. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 10697-10707. PMLR. +Nora Kassner and Hinrich Schütze. 2020. BERT-kNN: Adding a kNN search component to pretrained language models for better QA. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 3424-3430, Online. Association for Computational Linguistics. +Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neighbor machine translation. In International Conference on Learning Representations. +Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through Memorization: Nearest Neighbor Language Models. In International Conference on Learning Representations (ICLR). +Kalpesh Krishna, Yapei Chang, John Wieting, and Mohit Iyyer. 2022. Rankgen: Improving text generation with large ranking models. In Empirical Methods in Natural Language Processing. +Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2022. Deduplicating training data makes language models better. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8424-8445, Dublin, Ireland. Association for Computational Linguistics. +Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics. + +Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. 
Retrieval-augmented generation for knowledge-intensive nlp tasks. In NeurIPS. +Pedro Henrique Martins, Zita Marinho, and Andre F. T. Martins. 2022. Chunk-based nearest neighbor machine translation. ArXiv, abs/2205.12230. +R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, and Asli Celikyilmaz. 2021. How much do language models copy from their training data? evaluating linguistic novelty in text generation using raven. ArXiv, abs/2111.09509. +Yuxian Meng, Shi Zong, Xiaoya Li, Xiaofei Sun, Tianwei Zhang, Fei Wu, and Jiwei Li. 2022. GNN-LM: Language modeling based on global contexts via GNN. In International Conference on Learning Representations. +Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. +Sewon Min, Kenton Lee, Ming-Wei Chang, Kristina Toutanova, and Hannaneh Hajishirzi. 2021. Joint passage ranking for diverse multi-answer retrieval. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6997-7008, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. 2020. Compressive transformers for long-range sequence modelling. In International Conference on Learning Representations. +Alexandra Schofield, Laure Thompson, and David Mimno. 2017. Quantifying the effects of text duplication on semantic models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2737-2747, Copenhagen, Denmark. Association for Computational Linguistics. +Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-LM: Training multi-billion parameter language models using model parallelism. ArXiv, abs/1909.08053. +Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam M. 
Shazeer, Apoorv Kulshreshtha, HengTze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, Yaguang Li, Hongrae Lee, Huaixiu Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Yanqi Zhou, Chung-Ching Chang, I. A. Krivokon, Willard James Rusch, Marc Pickett, Kathleen S. Meier-Hellstern, Meredith Ringel + +Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Hartz Søraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravindran Rajakumar, Alena Butryna, Matthew Lamm, V. O. Kuzmina, Joseph Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. Lamda: Language models for dialog applications. ArXiv, abs/2201.08239. +Dietrich Wettschereck and Thomas Dietterich. 1993. Locally adaptive nearest neighbor algorithms. In Advances in Neural Information Processing Systems, volume 6. Morgan-Kaufmann. +Yuhuai Wu, Markus N. Rabe, DeLesley S. Hutchins, and Christian Szegedy. 2022. Memorizing transformers. In ICLR. +Frank F. Xu, Junxian He, Graham Neubig, and Vincent J. Hellendoorn. 2022. Capturing structural locality in non-parametric language models. In ICLR. +Dani Yogatama, Cyprien de Masson d'Autume, and Lingpeng Kong. 2021. Adaptive semiparametric language models. Transactions of the Association for Computational Linguistics, 9:362-373. +Hamed Zamani, Fernando Diaz, Mostafa Dehghani, Donald Metzler, and Michael Bendersky. 2022. Retrieval-enhanced machine learning. In SIGIR '22. +Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. Understanding deep learning requires rethinking generalization. In ICLR. +Xin Zheng, Zhirui Zhang, Junliang Guo, Shujian Huang, Boxing Chen, Weihua Luo, and Jiajun Chen. 2021. Adaptive nearest neighbor machine translation. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 368-374, Online. Association for Computational Linguistics. +Zexuan Zhong, Tao Lei, and Danqi Chen. 2022. Training language models with memory augmentation. In Empirical Methods in Natural Language Processing (EMNLP). \ No newline at end of file diff --git a/youcantpickyourneighborsorcanyouwhenandhowtorelyonretrievalintheknnlm/images.zip b/youcantpickyourneighborsorcanyouwhenandhowtorelyonretrievalintheknnlm/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b5a37af3d824313a4fb616a7b3ba764b23fe30a9 --- /dev/null +++ b/youcantpickyourneighborsorcanyouwhenandhowtorelyonretrievalintheknnlm/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2aaf8ee641f02708059ca8de5b91d174d0a2d55bf753172a85d991f17db90fa8 +size 393675 diff --git a/youcantpickyourneighborsorcanyouwhenandhowtorelyonretrievalintheknnlm/layout.json b/youcantpickyourneighborsorcanyouwhenandhowtorelyonretrievalintheknnlm/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1c1df79d82f705f15c0e19389ed786f03be373e4 --- /dev/null +++ b/youcantpickyourneighborsorcanyouwhenandhowtorelyonretrievalintheknnlm/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b714c68cd60a0aac43a042b47114aaf7fb57f6dcf712d850c6ece3191cdd8301 +size 410844 diff --git a/youtrulyunderstandwhatineedintellectualandfriendlydialogagentsgroundingpersonaandknowledge/7cd9ce4a-3834-4a7e-85ac-87bf2af837a5_content_list.json b/youtrulyunderstandwhatineedintellectualandfriendlydialogagentsgroundingpersonaandknowledge/7cd9ce4a-3834-4a7e-85ac-87bf2af837a5_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..271ca6879c3c25fd58b4495823a8afec321343bc --- /dev/null +++ 
b/youtrulyunderstandwhatineedintellectualandfriendlydialogagentsgroundingpersonaandknowledge/7cd9ce4a-3834-4a7e-85ac-87bf2af837a5_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf1f08c981c69d55e9bfa5038d9412576a24b67423cc16f49d7bfb555dc28256 +size 97963 diff --git a/youtrulyunderstandwhatineedintellectualandfriendlydialogagentsgroundingpersonaandknowledge/7cd9ce4a-3834-4a7e-85ac-87bf2af837a5_model.json b/youtrulyunderstandwhatineedintellectualandfriendlydialogagentsgroundingpersonaandknowledge/7cd9ce4a-3834-4a7e-85ac-87bf2af837a5_model.json new file mode 100644 index 0000000000000000000000000000000000000000..237e0dbefd85c38eee0512cd2dbf383a5c34db3d --- /dev/null +++ b/youtrulyunderstandwhatineedintellectualandfriendlydialogagentsgroundingpersonaandknowledge/7cd9ce4a-3834-4a7e-85ac-87bf2af837a5_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19c919a2feb63a5150fc7107009ad16a8f8406c02d850660c69fff22eb738000 +size 118315 diff --git a/youtrulyunderstandwhatineedintellectualandfriendlydialogagentsgroundingpersonaandknowledge/7cd9ce4a-3834-4a7e-85ac-87bf2af837a5_origin.pdf b/youtrulyunderstandwhatineedintellectualandfriendlydialogagentsgroundingpersonaandknowledge/7cd9ce4a-3834-4a7e-85ac-87bf2af837a5_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6e90226ee5c790652b4c72dfce932dc50cf29eb0 --- /dev/null +++ b/youtrulyunderstandwhatineedintellectualandfriendlydialogagentsgroundingpersonaandknowledge/7cd9ce4a-3834-4a7e-85ac-87bf2af837a5_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8589467db89b8197bbf94c8ce4aaf8d2a068fbb0ca533719db7aa1ff631c8dc +size 2538361 diff --git a/youtrulyunderstandwhatineedintellectualandfriendlydialogagentsgroundingpersonaandknowledge/full.md b/youtrulyunderstandwhatineedintellectualandfriendlydialogagentsgroundingpersonaandknowledge/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..b0b86f6be7cb03b6e432078736c3ea580d74a35c --- /dev/null +++ b/youtrulyunderstandwhatineedintellectualandfriendlydialogagentsgroundingpersonaandknowledge/full.md @@ -0,0 +1,446 @@ +# You Truly Understand What I Need: Intellectual and Friendly Dialogue Agents grounding Knowledge and Persona + +Jungwoo Lim $^{1}$ , Myunghoon Kang $^{1*}$ , Yuna Hur $^{1*}$ , Seungwon Jung $^{1*}$ , Jinsung Kim $^{1*}$ , Yoonna Jang $^{1}$ , Dongyub Lee $^{3}$ , Hyesung Ji $^{2}$ , Donghoon Shin $^{2}$ , Seungryong Kim $^{1§}$ and Heuseok Lim $^{1§}$ + +$^{1}$ Korea University, $^{2}$ Dialogue Tech Division, NCSOFT, $^{3}$ Naver Corporation + +{wjddn803,chaos8527,yj72722,redlion0929,jin62304,seungryong_kim,limhseok}@korea.ac.kr, {hyesung84,dhshin}@ncsoft.com, dongyub.lee@navercorp.com + +# Abstract + +To build a conversational agent that interacts fluently with humans, previous studies blend knowledge or personal profiles into the pre-trained language model. However, models that consider knowledge and persona at the same time are still limited, leading to hallucination and a passive way of using personas. We propose an effective dialogue agent that grounds external knowledge and persona simultaneously. The agent selects the proper knowledge and persona to use for generating the answers with our candidate scoring implemented with a poly-encoder. Then, our model generates the utterance with less hallucination and more engagingness, utilizing retrieval-augmented generation with a knowledge-persona enhanced query. We conduct experiments on the persona-knowledge chat and achieve state-of-the-art performance in grounding and generation tasks on the automatic metrics. Moreover, we validate the answers from the models regarding hallucination and engagingness through human evaluation and qualitative results.
We show our retriever's effectiveness in extracting relevant documents compared to other previous retrievers, along with a comparison of multiple candidate scoring methods. Code is available at https://github.com/dlawjddn803/INFO + +# 1 Introduction + +To build an ultimate conversational agent that interacts with humans fluently, previous studies provide generative neural network-based models (Sordoni et al., 2015; Vinyals and Le, 2015). Although the answers generated from those models are plausible, they lack informativeness and engagingness, resulting in bland responses compared to humans (Li et al., 2016; Gao et al., 2018). + +# Dialogue + +Human: Is it in England? + +Machine: No, it is actually in Scotland where you are going. + +Human: Where in Scotland? + +# Human's Persona + +I will travel through North Ayrshire. + +I am going to Scotland. + +I like history. + +I am interested in architecture. + +I love to garden. + +# Ground Truth Knowledge + +Eglinton Castle was a large Gothic castellated mansion in Kilwinning, North Ayrshire, Scotland.. + +# Predicted Answers + +BARTbase It is in Scotland, which is a place you love. + +BARTLarge It is in Scotland. in Scotland. in + +# Ground Truth Response + +It is in North Ayrshire so you could visit when you travel through. + +Table 1: Example of the generated answers from a typical generative model, i.e., BART. We can see that $\mathrm{BART}_{\text {base }}$ uses a different persona sentence, which does not appear in the human's personal profile, resulting in a hallucinated answer. Also, $\mathrm{BART}_{\text {large }}$ generates a less engaging answer by making use of only the knowledge to answer the question. Both generated responses exhibit hallucination and are less engaging. + +However, for a knowledgeable and attractive conversation, people usually provide informative replies by considering the background of the person whom they are talking to. Towards a human-like manner of dialogue, Ghazvininejad et al.
(2018) and Dinan et al. (2018) introduce the knowledge-grounded conversation for knowledgeable and informative responses, whereas Zhang et al. (2018a) suggest the persona-grounded dialogue for personalized responses to the users. + +To improve the machine's answer with an external knowledge base, one injects factual knowledge into the parameters of the language model (Raffel et al., 2020; Roberts et al., 2020). Despite the models' capability of utilizing external knowledge implicitly, they produce "hallucinations" in the responses (Marcus, 2020). Hallucination in the dialogue involves the situation where the generated output contradicts the reference knowledge. Also, it includes the situation where the generated output cannot be confirmed from the knowledge source (Ji et al., 2022). To mitigate these hallucinated answers, hybrid models combining parametric memory with non-parametric (i.e., retrieval-based) memory are introduced to directly access external memories, allowing the source to be inspected and interpreted (Karpukhin et al., 2020; Petroni et al., 2020; Lewis et al., 2020b). + +On the other hand, Zhang et al. (2018a) suggest persona-chat dialogues with the corresponding personal profiles of each interlocutor to avoid general and monotonous answers from the machine. Though See et al. (2019) and Liu et al. (2020) show comparable quality in generating personalized conversation, the generated utterances merely confirm each interlocutor's persona, resulting in a passive manner of speaking, such as "I have four children". In addition, the incoherent topics of the dialogues lead to shallow levels of conversation between the interlocutors. To elaborate on this chit-chat conversation supported by external knowledge, Jang et al. (2022) present a novel persona-knowledge chat with a generative model that considers persona information and world knowledge altogether.
Despite obtaining the knowledge and persona when generating the answers, the generative models' responses still exhibit both hallucination and less engagingness, as in Table 1. + +In this paper, we propose INFO (Intellectual and Friendly dialOg agents), which responds with external knowledge and persona simultaneously. Owing to its enhanced ability to capture the relevance between the context and each candidate set, the poly-encoder is used to implement the knowledge selector and persona selector for the grounding tasks. To alleviate hallucinated responses from the model, we adopt retrieval-augmented generation (RAG) (Lewis et al., 2020b), utilizing a non-parametric memory and a parametric generator in addition to an enhanced input query. By injecting the predicted sources as input to the retrieval-augmented generator, our model maintains consistency between grounding and generation during training. Therefore, our model generates more knowledgeable and engaging answers in an active manner with less hallucination. + +We show that INFO achieves the highest scores on both grounding and generation tasks in empirical experiments. Also, we compare diverse candidate scoring modules including the bi-encoder, cross-encoder, and poly-encoder and demonstrate their effect on generation. We additionally conduct experiments to show the effectiveness of the retriever module compared to sparse and dense retrievers. The qualitative results and human evaluation are also presented to validate our model's capability to generate human-like answers. + +Our contributions are as follows: + +- We propose a model that simultaneously grounds persona information and external knowledge, with less hallucination and active, adequate utilization of persona. +- Our approach suggests that the generated responses from the model are interpretable regarding what the model refers to while generating.
+- We show that INFO achieves SoTA performance on all of the automatic metrics and demonstrate its comparable quality with human evaluation and qualitative analysis. + +# 2 Related Works + +# 2.1 Knowledge Grounded Conversation + +To let neural network models ground external knowledge and generate informative answers, Ghazvininejad et al. (2018) suggest a data-driven neural conversational agent that provides knowledgeable answers. Also, Dinan et al. (2018) introduce open-domain dialogue where the two speakers talk with Wikipedia knowledge. To inject external knowledge into the pre-trained language model efficiently, Raffel et al. (2020) and Roberts et al. (2020) succeed in equipping the knowledge into the parameters and show comparable performance on open-domain question answering tasks. However, this approach is not capable of expanding or revising its inherent knowledge and produces hallucinations (Marcus, 2020). To overcome these limitations, Lewis et al. (2020b) combine a pre-trained parametric model and non-parametric memory for open-domain question answering to reduce hallucination. Since their non-parametric memory can be updated without extra pre-training, revising knowledge is more efficient. Furthermore, it is found that a retrieval-augmented + +![](images/d13f4e960cc1fabf29e096cb88e3cea46020b10dff3b11f8fe5ebf7e3474aef7.jpg) +Figure 1: Overview of our method. $U$ is the input comprising the dialogue history and knowledge snippet, and cand denotes each candidate from the grounding tasks. The grounding score is obtained through the dot product of the representation of the input context $U_{dial}$ and the candidate $a_t$ . The predicted sources are converted into the knowledge-persona enhanced query (KPEQ) together with the dialogue history, and the KPEQ is fed into the retrieval-augmented generator to generate the responses.
+ +generator reduces hallucination in knowledge-grounded conversation as well (Shuster et al., 2021), and a similar approach recently achieves outstanding performance in knowledge-grounded conversation (Paranjape et al., 2021). + +# 2.2 Persona Grounded Conversation + +In order to alleviate bland and general answers and maintain a consistent personality, Zhang et al. (2018a) construct a persona-chat dataset, in which the two interlocutors chat with persona profile sentences. Along with this dataset, Zhang et al. (2018a) introduce a model with a profile memory network that considers the dialogue history to perform attention over the persona. Mazare et al. (2018) enlarge the persona-chat dataset with a Reddit corpus, pre-train the model on this data, and then fine-tune the pre-trained model on persona-chat. Also, Liu et al. (2020) train a receiver to reinforce the mutual persona understanding between interlocutors, and Wolf et al. (2019) utilize pre-trained models (Radford et al., 2019) to build personalized dialogue agents. + +# 2.3 Encoders for Sentence Scoring + +There exist diverse encoder structures for sentence scoring. A bi-encoder scores the relevance between sentences by feeding the context and candidates into separate encoders. Examples of bi-encoders are memory networks (Zhang et al., 2018a), transformer memory networks (Dinan et al., 2018), and LSTMs (Lowe et al., 2015). Since a bi-encoder calculates with cached encoded sentence representations, it is relatively fast in computation. However, the bi-encoder is limited in capturing the mutual information between the context and candidates. A cross-encoder, on the other hand, scores by aligning the context and candidates in one sequence. One type of cross-encoder is the sequential matching network, which is based on deep matching networks (Yang et al., 2018) and gated self-attention (Zhang et al., 2018b).
Although using a cross-encoder can achieve rich interaction between the sentences within the encoder, the problem of slow processing still remains. To exploit the benefits of both models, the poly-encoder adds an attention mechanism to the bi-encoder architecture and achieves performance comparable to a cross-encoder with fast inference time (Humeau et al., 2019). For an enhanced representation of the grounding knowledge and persona, we employ a poly-encoder as the selector for each grounding task. + +# 3 Method + +To generate more knowledgeable and engaging dialogue, we introduce our conversational model that grounds external knowledge and persona information, as in Figure 1. We first encode the input with the pre-trained language model, and then choose the proper knowledge and persona from the given candidates with each selector. We employ a poly-encoder (Humeau et al., 2019) as the knowledge selector and persona selector to exploit its enhanced capability of capturing the relevance between the candidate set and the context (i.e., dialogue history). Then, the predicted persona and knowledge are aligned into one sequence with the dialogue history for consistency between grounding and generation. The sequence is defined as a knowledge-persona enhanced query (KPEQ), which is then fed into the retrieval-augmented generator (RAG). The generator then extracts the relevant paragraphs to refer to from the knowledge index to reduce hallucination. + +# 3.1 Input Construction + +The given dialogue is notated as $\{(u_1^{hm}, u_1^{mc}), \ldots (u_o^{hm}, u_o^{mc})\}$ , where $o$ is the number of rounds. $u^{hm}$ and $u^{mc}$ indicate the utterances of the human and machine, respectively. We first take the $o$ -th round dialogue history, except for the final machine's reply $u_o^{mc}$ , as the initial input for the model. We define the clue of the dialogue as the knowledge snippet $cl_k$ to inform the machine of which topic the user is interested in.
The knowledge snippet is the name of the landmark that the user encounters, which is the given topic of the dialogue. We then align the dialogue history and knowledge snippet into one sequence for the model input as $U = \{u_1^{hm}, u_1^{mc}, \ldots u_o^{hm}, cl_k\}$ . + +# 3.2 Model Components + +# 3.2.1 Poly-Encoder Based Candidate Scoring + +For the knowledge and persona grounding tasks, we suggest poly-encoder-based candidate scoring to leverage the capability of capturing the semantic similarities between the context input and the candidates. It is employed to select proper sources to be used when generating the utterance. When the context input $U$ comes in, we compute the grounding scores of each candidate utilizing the embeddings of the context input and encoded candidates in the poly-encoder. The grounding score is used to select the most suitable source(s) in the knowledge selector and persona selector, which will be introduced in the following Sections 3.2.2 and 3.2.3. + +In the poly-encoder architecture (Humeau et al., 2019), candidates are fed into the candidate encoder and denoted as $\{a_1,\dots,a_T\}$ , where $T$ is the number of candidates in the set. Each candidate embedding $a_{t}$ is the first output of the candidate encoder, which is represented by the transformer model. After encoding the candidates, the context input (i.e., dialogue history) is embedded with a separate context encoder. Unlike the candidate encoder, the context encoder embeds the dialogue into multiple vectors through $M$ context codes $\{c_1,\ldots c_M\}$ , which are learned for capturing diverse aspects of a given context rather than using one embedding. Each context code is used to extract $U_{dial}^{m}$ by attending over all the previous layer's outputs as follows. + +$$ +U_{dial}^{m} = \sum_{j} w_{j}^{c_{m}} h_{j} \tag{1} +$$ + +Note that $h_1, \ldots, h_n$ are the outputs of the pretrained language model and $n$ is the number of tokens in the input.
The weights are computed as $(w_1^{c_m}, \ldots, w_n^{c_m}) = \text{softmax}(c_m \cdot h_1, \ldots, c_m \cdot h_n)$.

A final attention step is then performed between the global features of the input and a given candidate. That is, the final dialogue feature $U_{dial}$ is obtained by aggregating the dialogue features $U_{dial}^{m}$, gaining richer interactions through the context codes, as in Equation 2:

$$
U_{dial} = \sum_{m} w_{m} U_{dial}^{m}, \tag{2}
$$

where $w_{1}, \ldots, w_{M}$ are obtained from $\mathrm{softmax}(a_t \cdot U_{dial}^1, \ldots, a_t \cdot U_{dial}^M)$.

The predicted candidate is the one with the highest dot-product score $U_{dial} \cdot a_t$.

# 3.2.2 Knowledge Selector (KS)

We build a knowledge selector for the knowledge grounding task, employing poly-encoder-based candidate scoring. Once the grounding scores are produced by the candidate scoring module, the label with the highest score is selected as the predicted knowledge.

The knowledge loss $\mathcal{L}_{KG}$ for the knowledge grounding task is the cross-entropy loss (Brier et al., 1950), as in Equation 3:

$$
\mathcal{L}_{KG} = -\sum_{j} kl_{j} \cdot \log \hat{kl}_{j}, \tag{3}
$$

where $kl_{j}$ is the ground-truth label from the knowledge candidates of the $j$-th example.

# 3.2.3 Persona Selector (PS)

We also implement a persona selector for the persona grounding task. Since multiple personas can be used to generate a response, one or more persona sentences must be considered. Similar to the knowledge selector, we assign a grounding score to each persona candidate with the candidate scoring module, as in Equations 1 and 2.

Once the candidate scores are computed, a persona level indicator predicts how many persona sentences should be selected, using the [CLS] token of the model input $U$.
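As an illustrative sketch of this selection step (the function and variable names are ours, and the scores stand in for the poly-encoder outputs):

```python
def select_personas(grounding_scores, persona_level):
    """Pick the top-`persona_level` persona candidates by grounding score.

    Returns binary grounding labels (1 = selected, 0 = not selected),
    mirroring the labels scored by the persona grounding loss (Equation 4).
    """
    # Rank candidate indices by descending grounding score.
    ranked = sorted(range(len(grounding_scores)),
                    key=lambda i: -grounding_scores[i])
    chosen = set(ranked[:persona_level])
    return [1 if i in chosen else 0 for i in range(len(grounding_scores))]
```

For example, with scores `[0.1, 0.9, 0.4, 0.7]` and a predicted level of 2, the labels are `[0, 1, 0, 1]`.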
After predicting the level of persona engagement, we pick the persona sentences to be grounded according to the predicted number. For example, if the persona level indicator predicts 2, the top-2 persona sentences are chosen for the persona grounding task. Selected persona sentences are labeled 1, and the rest 0. We use the binary cross-entropy loss for persona grounding, as in Equation 4:

$$
\mathcal{L}_{PG} = -\sum_{j} \left[ pl_{j} \cdot \log \hat{pl}_{j} + (1 - pl_{j}) \cdot \log (1 - \hat{pl}_{j}) \right] \tag{4}
$$

Note that $pl_j$ is the ground-truth label from the persona candidates of the $j$-th example.

# 3.2.4 Query-Enhanced Generator

Following Lewis et al. (2020b), we exploit retrieval-augmented generation's capability to reduce hallucination and to access memory directly. For consistent training across the grounding and generation tasks, we reconstruct the query that is fed into the retriever. Once the knowledge and persona are predicted by the selectors, we aggregate them with the dialogue history into one sequence. The final query is denoted as $\mathrm{KPEQ} = \{U; \hat{P}; \hat{K}\}$ and defined as the knowledge-persona enhanced query, where $\hat{P}$ and $\hat{K}$ are the predicted persona and knowledge from each candidate set.

The retriever $r_{\eta}$ searches for the top-K latent paragraphs given the KPEQ. We utilize a pretrained dense passage retriever (DPR) (Karpukhin et al., 2020) trained on the Natural Questions dataset (Kwiatkowski et al., 2019), which has parametric memory and a bi-encoder architecture, to retrieve latent document embeddings following Lewis et al.
(2020b):

$$
r_{\eta}(z \mid \mathrm{KPEQ}) \propto \exp\left(\mathbf{d}(z)^{\top} \mathbf{q}(\mathrm{KPEQ})\right), \tag{5}
$$

where $\mathbf{d}(\cdot)$ is an embedding from the document encoder and $\mathbf{q}(\cdot)$ is a representation from the query encoder, both implemented with $\mathrm{BERT}_{\mathrm{base}}$; $z$ denotes a retrieved document.

With the relevant paragraphs from the retriever, we employ the RAG-Token architecture as the generator to exploit its ability to predict each target token based on the top-K different paragraphs. Since RAG-Sequence, whose architecture differs from RAG-Token, uses the same retrieved document to predict every token, as depicted in Equation 6, the output may overly depend on a single retrieved document (Lewis et al., 2020b). The two versions of RAG (Lewis et al., 2020b) are as follows:

$$
S_{\mathrm{RS}}(y \mid x) \approx \sum_{z \in \operatorname{top-k}(p(\cdot \mid x))} r_{\eta}(z \mid x) \prod_{i}^{N} g_{\theta}\left(y_{i} \mid x, z, y_{1:i-1}\right) \tag{6}
$$

$$
S_{\mathrm{RT}}(y \mid x) \approx \prod_{i}^{N} \sum_{z \in \operatorname{top-k}(p(\cdot \mid x))} r_{\eta}(z \mid x)\, g_{\theta}\left(y_{i} \mid x, z, y_{1:i-1}\right), \tag{7}
$$

where $S_{\mathrm{RS}}$ indicates our method with the RAG-Sequence architecture and $S_{\mathrm{RT}}$ denotes our method with the RAG-Token model, $x$ is a token of the KPEQ, $y_{i}$ is a single token of the ground-truth response, $z$ is a paragraph from the retriever, and $N$ is the maximum sequence length.

The $S_{RT}$ generator $g(\cdot)$ marginalizes the loss over the different paragraphs when generating answers.
In detail, the generator outputs a distribution over the next token for each document before marginalizing, as in Equation 7, where $\eta$ denotes the parameters of the retriever and $\theta$ the parameters of the generator. The generator then repeats this process for the following output token. Finally, $S_{RT}$ generates tokens auto-regressively with standard beam search. In other words, the model minimizes the negative marginal log-likelihood of each input/output pair $(\mathrm{KPEQ}_j, y_j)$. The language model loss is formulated as:

$$
\mathcal{L}_{S} = -\sum_{j} \log p\left(y_{j} \mid \mathrm{KPEQ}_{j}\right) \tag{8}
$$

# 3.3 Final Objectives

We train the full model in a multi-task manner. The full objective is given in Equation 9:

$$
\mathcal{L} = \lambda_{KG} \mathcal{L}_{KG} + \lambda_{PG} \mathcal{L}_{PG} + \lambda_{S} \mathcal{L}_{S} \tag{9}
$$
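The token-level marginalization of Equation 7 can be sketched as follows (a minimal sketch in plain Python; in the actual model, the retrieval scores and per-document token distributions come from the DPR retriever and the generator):

```python
import math

def rag_token_next_token_dist(doc_scores, per_doc_token_logprobs):
    """Marginalize next-token distributions over top-k retrieved documents,
    as in the RAG-Token formulation (Equation 7).

    doc_scores: unnormalized retrieval scores r_eta(z | x), one per document.
    per_doc_token_logprobs: one dict per document mapping each candidate
        token to log g_theta(y_i | x, z, y_{1:i-1}).
    Returns a dict mapping tokens to marginal probabilities.
    """
    # Softmax-normalize the retrieval scores over the top-k documents.
    m = max(doc_scores)
    weights = [math.exp(s - m) for s in doc_scores]
    total = sum(weights)
    weights = [w / total for w in weights]

    # Weighted sum of the per-document next-token distributions.
    marginal = {}
    for w, token_logprobs in zip(weights, per_doc_token_logprobs):
        for tok, lp in token_logprobs.items():
            marginal[tok] = marginal.get(tok, 0.0) + w * math.exp(lp)
    return marginal
```

For instance, with two equally scored documents putting probability 0.8 and 0.2 on a token, its marginal probability is 0.5; decoding then proceeds token by token over these marginal distributions.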
| Models | chrF++ | BLEU | R-1 | R-2 | R-L | BERTScore | Persona (Acc.) | Knowledge (Acc.) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-2$_{small}$ | 28.73 | 11.43 | 36.58 | 19.44 | 32.62 | 88.56 | 67.44 | 69.59 |
| GPT-2$_{medium}$ | 30.12 | 12.31 | 38.29 | 21.17 | 34.12 | 88.92 | 67.44 | 72.42 |
| BART$_{base}$ | 29.77 | 11.99 | 36.24 | 19.73 | 32.13 | 88.35 | 67.45 | 72.18 |
| BART$_{large}$ | 30.69 | 11.91 | 36.57 | 19.83 | 32.05 | 88.10 | 67.44 | 71.01 |
| INFO ($S_{RS}$) | 51.33 | 29.36 | 53.36 | 40.36 | 51.16 | 92.00 | 82.70 | 99.24 |
| INFO ($S_{RT}$) | 53.29 | 31.46 | 58.26 | 42.35 | 53.06 | 92.29 | 80.87 | 99.22 |
Table 2: Main results on the official validation set. $S_{RS}$ denotes our method with the RAG-Sequence architecture and $S_{RT}$ the model with the RAG-Token generator. The models are evaluated with the generation metrics $\mathrm{chrF++}$, BLEU, ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-L (R-L), and BERTScore.

We control the proportion of each task by setting $\lambda_{KG}$, $\lambda_{PG}$, and $\lambda_{S}$ to 1:1:5 in our experiments; the value of each $\lambda$ was found by manual search.

# 4 Experiments

# 4.1 Experiment Details

Dataset FoCus (Jang et al., 2022) is a benchmark dataset for customized dialogue, where each conversation is directly grounded in knowledge and persona. It contains knowledge-aware dialogues with personal profiles between humans and machines: 12,484 dialogues over 5,152 knowledge sources from Wikipedia and 32,855 persona sentences. To validate knowledge grounding capability and customized dialogue generation, we evaluate our method on the official FoCus validation set, since results on the official test set can be obtained only through the leaderboard*.

Experimental Setup For each candidate scoring module, we implement the poly-encoder (Humeau et al., 2019) with BERT$_{large}$ and 16 context codes. For dialogue generation, we implement our method with Hugging Face (Wolf et al., 2020) and use `facebook/rag-token-nq` as the backbone model. We use the same retriever and generator architecture as RAG, including the decoding, and leverage our knowledge index for non-parametric query-document ranking with the FAISS library (Johnson et al., 2019). The knowledge index consists of the paragraphs from the given Wikipedia knowledge, titled with the name of the given landmark. We set the learning rate to $6.25\mathrm{e-}6$ with AdamW (Kingma and Ba, 2014) for optimization.
The batch size is set to 32, and the number of dialogue history turns is 1. The whole model was trained for three epochs on an RTX A6000 GPU, taking 8 hours per epoch.

Baselines We implement the baselines from the previous study (Jang et al., 2022) and also conduct experiments with GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2020a). For a fair comparison, we report results on GPT-2$_{small}$, which has 12 layers, and BART$_{base}$, which has 6 encoder and 6 decoder layers. GPT-2$_{medium}$ contains 24 decoder layers, and BART$_{large}$ has 12 layers each for the encoder and decoder.

# 4.2 Automatic Evaluation

We report the main results on the FoCus dataset with automatic metrics for the grounding and generation tasks. The official metrics for the benchmark are $\mathrm{chrF}++$ (Popović, 2017), BLEU (Papineni et al., 2002), ROUGE-1, ROUGE-2, and ROUGE-L (Lin, 2004). To measure token-level semantic similarity between candidate and reference sentences using contextual representations, we additionally adopt BERTScore (Zhang* et al., 2020). For the grounding tasks, we use accuracy for both knowledge and persona grounding, and additionally the F1 score for persona grounding.

In Table 2, our method shows substantial improvements over the baselines on all metrics, from generation to grounding. In particular, INFO improves by at least $18\%$ on the generation metrics other than BERTScore. Furthermore, our model achieves remarkable gains in persona and knowledge accuracy. Unlike its performance on the other generation metrics, $S_{RS}$ demonstrates better persona accuracy than $S_{RT}$. This result might be
| Model | chrF++ | BLEU | R-1 | R-2 | R-L | BERTScore | Persona (Acc.) | Persona (F1) | Knowledge (Acc.) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $S_{RT}$ + Bi-encoder | 51.83 | 29.51 | 56.35 | 40.80 | 51.37 | 91.86 | 88.10 | 38.20 | 99.18 |
| $S_{RT}$ + Cross-encoder | 49.90 | 27.18 | 53.57 | 38.25 | 49.29 | 91.52 | 87.09 | 35.32 | 99.49 |
| $S_{RT}$ + Poly-encoder | 53.29 | 31.46 | 58.26 | 42.35 | 53.06 | 92.29 | 80.87 | 39.56 | 99.22 |
attributed to the architecture of the generator, which is better suited to sentence classification tasks such as persona grounding. Official test results are reported in Appendix A; BERTScore is missing there because the ground truth has not been released.

# 4.3 Human Evaluation

We conduct a human evaluation of our model's responses through the Amazon MTurk service†. The assessment criteria are fluency, adequacy, provenance, engagingness, and hallucination. Specifically, provenance is the degree to which the ground-truth knowledge is utilized in the response, whereas engagingness measures how persona-related the answer is. Hallucination indicates whether the answer contradicts the persona and knowledge or cannot be verified from the source content. We randomly chose 50 dialogues from the official test set, and three workers were allocated to evaluate each dialogue generated by our model and the baselines. We asked the workers to rank the answers according to each criterion, following Cho and May (2020). Ranks range from 1 to 5; a lower number indicates better quality, except for hallucination. Inter-annotator agreement, measured with Fleiss' kappa, is 0.4185, indicating moderate agreement. Relations between the annotators are unlikely, since we collect results from Amazon MTurk workers.

As shown in Table 4, INFO surpasses BART$_{base}$, BART$_{large}$, GPT-2$_{small}$, and GPT-2$_{medium}$ on all criteria. INFO achieves the highest rank in adequacy, fluency, and provenance, generating more human-like responses than the other generative models. The workers also ranked our model lowest when asked to rank responses from most to least hallucinated. Thus, INFO generates more engaging and less hallucinated utterances according to human judges. The distribution of ranks per criterion is illustrated in Appendix B.
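For reference, the Fleiss' kappa statistic used above can be computed from a subjects-by-categories count matrix; this is a standard textbook implementation, not the authors' evaluation script:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa, where ratings[i][j] is the number of raters who
    assigned subject i to category j (equal raters per subject)."""
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    # Observed per-subject agreement P_i, averaged into P-bar.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ]
    p_bar = sum(p_i) / n_subjects
    # Chance agreement P_e from overall category proportions.
    n_categories = len(ratings[0])
    totals = [sum(row[j] for row in ratings) for j in range(n_categories)]
    grand_total = n_subjects * n_raters
    p_e = sum((t / grand_total) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)
```

Perfect agreement yields 1.0, and values in the 0.41-0.60 band are conventionally read as moderate agreement on the Landis and Koch scale.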
Table 3: Performance comparison between the encoding modules for the grounding tasks.
| Models | Ad. ↓ | Fl. ↓ | Prov. ↓ | Eng. ↓ | Hall. ↑ |
| --- | --- | --- | --- | --- | --- |
| GPT-2$_{small}$ | 3.57 | 3.41 | 3.58 | 3.46 | 2.49 |
| GPT-2$_{medium}$ | 3.11 | 3.10 | 3.04 | 3.25 | 3.02 |
| BART$_{base}$ | 3.43 | 3.29 | 3.47 | 3.22 | 2.45 |
| BART$_{large}$ | 3.31 | 3.63 | 3.29 | 3.44 | 2.69 |
| INFO (Ours) | 1.57 | 1.57 | 1.62 | 1.63 | 4.35 |
Table 4: Human evaluation. Each value is the average rank of the model's responses. The abbreviations Ad., Fl., Prov., Eng., and Hall. denote adequacy, fluency, provenance, engagingness, and hallucination, respectively.

# 5 Results and Analysis

# 5.1 Variants on Candidate Scoring Module

To validate the poly-encoder as a candidate scoring module, we compare it with alternative scoring modules, namely the bi-encoder and the cross-encoder. From the results in Table 3, the poly-encoder performs best on the generation task. On the grounding task, $S_{RT}$ with cross-encoder scoring shows higher accuracy for grounding persona and knowledge. At first glance, $S_{RT}$ with the bi-encoder or cross-encoder appears better than with the poly-encoder. However, INFO's F1 score is higher than with the other two scoring modules, implying that the lower persona accuracy is due to the poly-encoder's tendency to actively use personas, whereas the other two models tend to predict that no persona sentence should be used. The results suggest that high persona accuracy does not always guarantee engagingness in dialogue.

# 5.2 Comparison on other Retrievers

We show that INFO is effective at retrieving knowledge compared with other sparse and dense retrievers. We retrieve knowledge from our knowledge index built from Wikipedia paragraphs, using TF-IDF (Joachims, 1996) and dense passage retrieval (DPR) (Karpukhin et al., 2020) as baselines. For TF-IDF, we cap the sum of query and knowledge tokens at 512, the maximum sequence length of DPR and INFO, and use bert-base-uncased as the tokenizer. For DPR, we extract fewer than 40 knowledge paragraphs using TF-IDF due to memory limitations. We first retrieve the five paragraphs related to the query, which comprises the knowledge snippet, dialogue history, predicted knowledge candidate, and selected persona sentences.
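The TF-IDF baseline can be illustrated with a toy ranking function (a simplified sketch using whitespace tokenization and cosine similarity; the actual baseline uses the bert-base-uncased tokenizer and the 512-token budget described above):

```python
import math
from collections import Counter

def tfidf_rank(query, paragraphs):
    """Rank paragraph indices by TF-IDF cosine similarity to the query."""
    docs = [Counter(p.lower().split()) for p in paragraphs]
    n = len(docs)
    # Inverse document frequency for every term seen in the paragraphs.
    idf = {t: math.log(n / sum(1 for d in docs if t in d))
           for d in docs for t in d}

    def vectorize(counts):
        # TF-IDF weight per term; terms unseen in the index get dropped.
        return {t: c * idf[t] for t, c in counts.items() if t in idf}

    def cosine(a, b):
        dot = sum(v * b.get(t, 0.0) for t, v in a.items())
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    q_vec = vectorize(Counter(query.lower().split()))
    scores = [cosine(q_vec, vectorize(d)) for d in docs]
    return sorted(range(n), key=lambda i: -scores[i])
```

A real system would use a sparse inverted index for efficiency; the ranking logic, however, is the same.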
In Table 5, the retriever we use outperforms TF-IDF and DPR on all metrics, including BERTScore. This implies that INFO's retriever is better suited to extracting relevant paragraphs than the other retrievers.
| Model | chrF++ | BLEU | R-1 | R-2 | R-L | BERTScore |
| --- | --- | --- | --- | --- | --- | --- |
| TF-IDF | 19.91 | 3.52 | 13.91 | 9.96 | 12.43 | 51.54 |
| DPR | 20.57 | 3.86 | 12.44 | 6.55 | 10.20 | 47.48 |
| INFO | 26.36 | 7.40 | 15.48 | 12.18 | 14.32 | 53.14 |
# 5.3 Effect of Selectors on Generation

We measure each selector module's effect on the generation task by changing the query fed into the retriever on the validation set. The experimental results are shown in Table 6, where $GT_{K}$ and $GT_{P}$ represent the ground-truth knowledge and persona. Although the query comprising the ground-truth sources achieves the highest scores, INFO demonstrates comparable results on the generation task. Since the performance gain from $\mathrm{INFO} + GT_{P}$ is about $2.8\%$p larger than that from $\mathrm{INFO} + GT_{K}$, we conclude that our persona selector still has room for improvement.

Table 5: Comparison with other retrievers.
| Query | chrF++ | BLEU | R-1 | R-2 | R-L | BERTScore |
| --- | --- | --- | --- | --- | --- | --- |
| INFO ($S_{RT}$) | 53.29 | 31.46 | 58.26 | 42.35 | 53.06 | 92.29 |
| $+GT_{K}$ | 53.35 | 31.56 | 58.31 | 42.55 | 53.18 | 92.29 |
| $+GT_{P}$ | 56.19 | 34.39 | 61.61 | 45.46 | 56.01 | 92.79 |
| $+GT_{K}+GT_{P}$ | 56.40 | 34.60 | 61.88 | 45.64 | 56.16 | 92.84 |
# 5.4 Qualitative Analysis

Table 7 illustrates an example from the predicted results. In the case of $\mathrm{BART}_{\text{large}}$ and GPT-2$_{\text{medium}}$, the responses only reflect the ground

Table 6: Comparison of generation performance for query variants with ground-truth knowledge and persona. All results are evaluated on the official validation set.
**Given Landmark**

Finding Nemo Submarine Voyage

**Dialogue**

Human: What area of the park is this ride in?
Machine: This ride is located in the Tomorrowland area of Disneyland.
Human: Has this ride always been about Finding Nemo?

**Human's Persona**

- I've never been to California.
- My favorite cartoon is Finding Nemo.
- I would like to visit Disneyland.
- My favorite color is yellow.
- I enjoy swimming.

**Ground Truth Knowledge (Grounding)**

Based on the characters and settings of the 2003 Disney·Pixar film, Finding Nemo, it is a re-theming of the classic Submarine Voyage attraction that operated from 1959 to 1998.

**Retrieved Knowledge (Generation)**

- The original Submarine Voyage was built in 1959 as part of the then-new Tomorrowland...
- In 2008, Finding Nemo Submarine Voyage received an award for outstanding achievement from the Themed Entertainment Association.
- Finding Nemo Submarine Voyage is ... which opened on June 11, 2007.
- ...it is a re-theming of the classic Submarine Voyage attraction that operated from 1959 to 1998.
- On July 15, 2005, two days before the 50th Anniversary of Disneyland, the Finding Nemo Submarine Voyage was officially announced at the new Turtle Talk...
- The attraction reuses the eight original 1959 Submarine Voyage through Liquid Space attraction ...

**Predicted Answers**

- BART$_{base}$: Yes, that's right. You're a fan of the "Fantasy" film, so I.
- BART$_{large}$: Yes, the ride is based on the characters and settings of the 2003 Disney·Pixar film
- GPT-2$_{small}$: No, it was originally a way to show that you love Finding Nemo.
- GPT-2$_{medium}$: Yes, it has operated from 1959 to 1998.
- INFO (Ours): No, this attraction is actually a re-theme of the classic submarine voyage attraction that operated from 1959 to 1998. The attraction is based on the characters and settings of the 2003 Disney Pixar film Finding Nemo, which is your favorite cartoon.

**Ground Truth Response**

No, your favorite cartoon is a new addition to this ride. The current Finding Nemo ride is a re-theming of the classic "Submarine Voyage" attraction that operated here from 1959 to 1998.
Table 7: Qualitative result. All the grounding predictions are from our model, INFO, which predicts the correct answers in both tasks. We add the other baselines' responses for comparative analysis.

truth knowledge, resulting in less engaged answers without any persona-related phrases. Although $\mathrm{BART}_{\mathrm{base}}$ appears to employ a persona sentence in the phrase "You're a fan of the Fantasy film", the sentence it uses does not appear in the human's persona profile; consequently, the provenance of the utterance cannot be identified in the knowledge source. Moreover, GPT-2$_{small}$ generates an utterance that contradicts the ground-truth knowledge. Overall, the generated responses from the baselines hallucinate on both persona and knowledge. Unlike the baselines, our model blends the ground-truth knowledge and persona sentence into the response with less hallucination and more engagingness. In addition, the retrieved knowledge source our model refers to provides users with interpretability and provenance for the responses. More examples are depicted in Appendix C.

# 6 Conclusions

In this paper, we presented a conversational agent that generates responses grounded in the user's persona and external knowledge. We utilized poly-encoder-based candidate scoring for each grounding task, and additionally implemented a persona level indicator to support multiple persona selections for fine-grained persona grounding. With the predicted sources, we construct a knowledge-persona enhanced query to retrieve latent paragraphs, which are used to generate informative and engaging responses by marginalizing the loss for each token. Our method achieves state-of-the-art (SoTA) scores in both the grounding and generation tasks on the persona-knowledge conversation dataset.
We also demonstrate through human evaluation and qualitative analysis that the responses from INFO show less hallucination and more engagingness, and we compare grounding modules and retrievers to show INFO's effectiveness.

# 7 Limitations

The proposed model, INFO, has limitations. Given INFO's settings, the model cannot handle real-world applications in which ground-truth knowledge or persona candidates are absent from the grounding task. We also conducted a human evaluation of the proposed model's ability to mitigate hallucination in dialogue generation; however, the number of cases is relatively small for evaluating this capability. Finally, INFO demands substantial GPU resources, since it marginalizes the loss at the token level.

We plan to improve INFO in future work. We will train and evaluate INFO in open-domain as well as real-world settings toward practical conversational agents, and will conduct human evaluations with more cases. In particular, we will improve the quantitative measurement of the model's hallucinated answers. Last but not least, we will improve INFO's generator with more computationally efficient components.

# 8 Acknowledgement

This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques). This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-2018-0-01405) supervised by the IITP. This work was also supported by an IITP grant funded by the Korea government (MSIT) (No.
2022-0-00369, (Part 4) Development of AI Technology to support Expert Decision-making that can Explain the Reasons/Grounds for Judgment Results based on Expert Knowledge).

# References

Glenn W. Brier et al. 1950. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1-3.
Hyundong Cho and Jonathan May. 2020. Grounding conversations with improvised dialogues. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2398-2413, Online. Association for Computational Linguistics.
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of Wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations.
Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational AI. ACL 2018, page 2.
Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Conference on Artificial Intelligence.
Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Architectures and pre-training strategies for fast and accurate multi-sentence scoring. In International Conference on Learning Representations.
Yoonna Jang, Jungwoo Lim, Yuna Hur, Dongsuk Oh, Suhyune Son, Yeonsoo Lee, Donghoon Shin, Seungryong Kim, and Heuiseok Lim. 2022. Call for customized conversation: Customized conversation grounding persona and knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10803-10812.
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, and Pascale Fung. 2022. Survey of hallucination in natural language generation. arXiv preprint arXiv:2202.03629.
Thorsten Joachims. 1996. A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization.
Technical report, Carnegie Mellon University, Pittsburgh, PA, Department of Computer Science.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535-547.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural Questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459-9474.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and William B. Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119.

Chin-Yew Lin. 2004.
ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
Qian Liu, Yihong Chen, Bei Chen, Jian-Guang Lou, Zixuan Chen, Bin Zhou, and Dongmei Zhang. 2020. You impress me: Dialogue generation via mutual persona perception. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
Ryan Lowe, Nissan Pow, Iulian Vlad Serban, and Joelle Pineau. 2015. The Ubuntu Dialogue Corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285-294.
Gary Marcus. 2020. The next decade in AI: Four steps towards robust artificial intelligence. arXiv preprint arXiv:2002.06177.
Pierre-Emmanuel Mazaré, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775-2779.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
Ashwin Paranjape, Omar Khattab, Christopher Potts, Matei Zaharia, and Christopher D. Manning. 2021. Hindsight: Posterior-guided training of retrievers for improved open-ended generation. In International Conference on Learning Representations.
Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2020. How context affects language models' factual predictions. In Automated Knowledge Base Construction.
Maja Popović. 2017. chrF++: Words helping character n-grams. In Proceedings of the Second Conference on Machine Translation, pages 612-618, Copenhagen, Denmark. Association for Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1-67.

Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426.
Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? How controllable attributes affect human judgments. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1702-1723.
Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3784-3803.
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and William B. Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196-205.
Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149.
Liu Yang, Minghui Qiu, Chen Qu, Jiafeng Guo, Yongfeng Zhang, W. Bruce Croft, Jun Huang, and Haiqing Chen. 2018. Response ranking with deep matching networks and external knowledge in information-seeking conversation systems. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 245-254.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018a. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204-2213.
Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.
Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. 2018b. Modeling multi-turn conversation with deep utterance aggregation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3740-3752.

# A Automatic Evaluation on Official Test Set
| Model | chrF++ | BLEU | R-1 | R-2 | R-L | Persona (Acc.) | Knowledge (Acc.) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-2small | 28.83 | 11.60 | 36.28 | 19.56 | 32.42 | 67.83 | 70.95 |
| GPT-2medium | 30.34 | 12.58 | 38.35 | 21.16 | 34.34 | 67.64 | 72.46 |
| BARTbase | 29.80 | 12.15 | 36.26 | 19.73 | 32.06 | 67.66 | 72.02 |
| BARTlarge | 30.63 | 11.86 | 36.36 | 19.42 | 31.73 | 67.62 | 70.53 |
| INFO (RS) | 52.81 | 29.41 | 56.37 | 40.41 | 51.16 | 82.74 | 98.88 |
| INFO (RT) | 54.61 | 32.33 | 58.27 | 42.39 | 53.09 | 80.83 | 99.10 |
Table 8: Main results on the official test set. RT indicates the model using the RAG-Token model as its generator. The models are evaluated with the generation metrics chrF++, BLEU, ROUGE-1 (R-1), ROUGE-2 (R-2) and ROUGE-L (R-L), along with accuracy on the persona grounding and knowledge grounding tasks. Since BERTScore is not an official generation metric and the ground truth of the official test set has not yet been disclosed, we cannot report results on that metric.

# B Human Evaluation Distribution on Each Criterion

![](images/87954346c19ab943cb94895ce82a30ba339241a4ae089f5257526075c49f77e9.jpg)
(a) Adequacy

![](images/cee6e6f2758550e1e480fee0b95d3f48a9e1f8b188cef2d0d26b485e4db72153.jpg)
(b) Fluency

Figure 2: The distribution of the ranks on the adequacy and fluency criteria. Labels A to E indicate INFO, BARTbase, BARTlarge, GPT-2small, and GPT-2medium, in that order.

![](images/bf72818576e30e6dbdb7cbe726c38f0c691fd96752cdfb68d18cef69730adace.jpg)
(a) Provenance

![](images/93eb808181430797103688246861303bc19471b3b6c798c24e51db692e330d19.jpg)
(b) Engagingness

Figure 3: The distribution of the ranks on the provenance and engagingness criteria. Labels A to E indicate INFO, BARTbase, BARTlarge, GPT-2small, and GPT-2medium, in that order.

![](images/4167b0483bc35e4e5a94cdb0248ca4e0c70f9a4346f4db297febb5e900bff26f.jpg)
Figure 4: The distribution of the ranks on the less-hallucination criterion. Note that the highest rank (1) means the most hallucinated. Labels A to E indicate INFO, BARTbase, BARTlarge, GPT-2small, and GPT-2medium, in that order.

# C Qualitative Results

# Given Landmark

Nocton Hall

# Dialogue

Human: I know this place, but I don't remember the name of this place.

# Human's Persona

I have a hall in my house.
I have worked with military hospital.
I would like to go to England.
I have heard about Frederick John Robinson.
I have respect to American Officers.
+ +# Ground Truth Knowledge (Grounding) + +Nocton Hall is a historic Grade II listed building in the village of Nocton, in Lincolnshire, England. + +# Retrieved Knowledge (Generation) + +"Hogesdon" is first recorded in the Domesday Book, meaning an Anglo-Saxon farm (or "fortified enclosure") belonging to Hoch, or Hocq... + +By 1677, the church was in such poor structural condition that the Dean of Chichester convened a consistory court with three parishioners and asked them why repairs had not been carried out... + +he south transept had historically served as the mausoleum for the Cavendish family, who resided at nearby Latimer... + +Following the diminished influence of the ancient Priory, predecessor of the two parishes of St Bartholomew, disputes began to arise over rights to tithes and taxes payable by lay residents who claimed allegiance with the nearby and ancientsly associated parish of St Botolph Aldersgate - an unintended consequence and legacy of King Henry VIII's religious reforms. + +a wall-mounted stone pulpit, a stone altar, a series of tall, pointed windows high in the walls, an unadorned stone font and a short wooden steeple-little more than an extended belfry-extending from the nave roof... + +Predicted Answers + +
| Model | Predicted answer |
| --- | --- |
| BARTbase | This is the Nocton Hall, a historic Grade II listed building in the village of Nocton, |
| BARTlarge | Nocton Hall is a historic Grade II listed building in the village of Nocton, in Lincoln |
| GPT-2small | This is Nocton Hall, a historic Grade II listed building in the village of Nocton, in |
| GPT-2medium | This is the Nocton Hall, a historic Grade II listed building in the village of Nocton, |
| INFO (Ours) | This will be a nice vision for you. Nocton Hall is a historic Grade II listed building in England that you would like to visit. |
# Ground Truth Response

You can see Nocton Hall in the village of Nocton, in Lincolnshire of England, the country you want to visit.

# Given Landmark

Maiden Castle, Dorset

# Dialogue

Human: Wow, this is amazing! What is this?

Machine: It is Maiden Castle in Dorset. I thought you would like it since you are interested in historic forts.

Human: Who owns the site today?

# Human's Persona

I like Britain.

I have been to Dorset.

I am interested in historic forts.

I hope to work for English Heritage.

I would like to visit an old fort.

# Ground Truth Knowledge (Grounding)

Today the site is protected as a Scheduled Ancient Monument and is maintained by English Heritage.

# Retrieved Knowledge (Generation)

Portland Castle is an artillery fort constructed by Henry VIII on the Isle of Portland, Dorset, between 1539 and 1541...

this version of events, or even that the hill fort was attacked by the Romans...

Between 1985 and 1986 further excavations under Niall Sharples were prompted by the hill fort's deteriorating condition, partly caused by the large number of visitors to the site...

a Tudor rose and the initials E.R. (Elizabeth Regina), has been preserved and can be seen in the inner bailey of the castle mounted on a replica carriage...

Constructed on a territorial boundary in about 600 BC, the first hill fort at Maiden Castle was a 6.4-hectare (16-acre) area surrounded by a single ditch...

Predicted Answers
| Model | Predicted answer |
| --- | --- |
| BARTbase | The site is maintained by English Heritage, the country you are from. |
| BARTlarge | Today the site is owned by English Heritage.... |
| GPT-2small | Today the site is protected as a Scheduled Ancient Monument and is maintained by English Heritage. |
| GPT-2medium | Today the site is maintained by English Heritage. |
| INFO (Ours) | Today the site is owned by English Heritage. You may wish to research this further since you hope to work for English Heritage. |
# Ground Truth Response

It is owned by English Heritage; a company you hope to work for.

Table 9: Qualitative results. All predicted results on the grounding tasks are from our model, INFO, which predicts the correct answers in both examples. We also include the other baselines' responses for comparative analysis.
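The persona- and knowledge-grounding numbers reported in Table 8 are accuracies over candidate selections. As a minimal illustration (function and variable names are ours, not from the paper's code), such an accuracy can be computed as:

```python
def grounding_accuracy(predicted, gold):
    """Percentage of turns where the selected grounding candidate
    (a persona sentence or knowledge paragraph index) matches gold."""
    assert len(predicted) == len(gold)
    correct = sum(p == g for p, g in zip(predicted, gold))
    return 100.0 * correct / len(predicted)

# Toy example: candidate indices chosen per turn vs. the annotation.
print(grounding_accuracy([2, 0, 1, 3, 0], [2, 0, 2, 3, 0]))  # 80.0
```

The same computation applies to both grounding tasks; only the candidate pools differ.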
# ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-shot Generalization

Hanwei Xu*, Yujun Chen*, Yulun Du*, Nan Shao, Yanggang Wang, Haiyu Li, Zhilin Yang†

Recurrent AI

{xuhanwei, chenyujun, duyulun, kimi_yang}@rcrai.com

# Abstract

We propose a multitask pretraining approach, ZeroPrompt, for zero-shot generalization, focusing on task scaling and zero-shot prompting. While previous models are trained on only a few dozen tasks, we scale to 1,000 tasks for the first time using real-world data. This leads to a crucial discovery that task scaling can be an efficient alternative to model scaling; i.e., the model size has less impact on performance with an extremely large number of tasks. Our results show that on the datasets we consider, task scaling can improve training efficiency by 30 times in FLOPs. Empirically, ZeroPrompt substantially improves both the efficiency and the performance of zero-shot learning across a variety of academic and production datasets.

# 1 Introduction

Recent progress like GPT-3 (Brown et al., 2020) demonstrates the possibility of prompting larger-scale models for zero-shot learning, but the performance of zero-shot generalization still falls short on many tasks compared to fully-supervised finetuning. Further, other works proposed to include a set of supervised tasks in pretraining (Zhong et al., 2021; Wei et al., 2021; Sanh et al., 2021), and prompts are often used in these frameworks to unify the tasks. Zhong et al. (2021) converted different datasets into a unified "yes/no" question answering format with label descriptions. FLAN (Wei et al., 2021) extended the scope by considering more task types and a larger model. T0 (Sanh et al., 2021) collected a large set of diverse prompts for each task to further enhance performance.
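The 30x training-efficiency figure (0.4B vs. 12B parameters) can be sanity-checked under the common rule of thumb that training cost is roughly 6 FLOPs per parameter per training token; this heuristic is our assumption, not part of the paper:

```python
def train_flops(n_params, n_tokens):
    # Rule-of-thumb training cost: ~6 FLOPs per parameter per token.
    return 6 * n_params * n_tokens

tokens = 100e9  # any fixed token budget; it cancels in the ratio
ratio = train_flops(12e9, tokens) / train_flops(0.4e9, tokens)
print(ratio)  # 30.0
```

Under this linear-in-parameters approximation, matching a 12B model with a 0.4B model at the same token budget saves exactly 12/0.4 = 30x in training FLOPs.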
Although the effects of model scaling and prompt scaling (Wei et al., 2021; Sanh et al., 2021) have been explored, only dozens of training tasks are exploited in these works. It is still not clear how scaling the number of training tasks to hundreds or even thousands of tasks affects the performance of multitask pretraining. We hypothesize that task scaling plays an important role in training generalizable zero-shot systems and explore the limits of task scaling using 1,000 tasks. Interestingly, our empirical study reveals that task scaling can be an efficient alternative to model scaling, as shown in Figure 1. With an extremely large number of training tasks, the model size has less impact on performance. A 0.4B model can achieve zero-shot performance comparable to that of a 12B model, improving training efficiency by 30 times in terms of FLOPs, and serving efficiency as well.

![](images/7be732e9d212eaf40a496dc72b81edbee92f4a245469bfa8f13de94304edabcc.jpg)
Figure 1: Task scaling vs model scaling. The horizontal axis is the number of training tasks, and the vertical axis is the zero-shot performance on unseen tasks. RoBERTa-Large was finetuned in a fully-supervised manner, while Pangu Alpha, CPM-2 and our ZeroPrompt were zero-shot prompted.

Our contributions can be summarized as follows.

- We scale the number of tasks to 1,000 in multitask pretraining for the first time. Our study reveals a crucial finding that, on the datasets we consider, task scaling is an efficient alternative to model scaling.
- Our experiments demonstrate that task scaling improves both the efficiency and the performance of zero-shot learning.

# 2 Related Work

Pretrained language models, like BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), T5 (Raffel et al., 2020) and GPTs (Brown et al., 2020; Radford et al., 2018), have achieved strong performance on various NLP tasks.
In some cases, pretrained models can perform well with only a few training samples (Liu et al., 2021; Schick and Schütze, 2021), or even without any training samples (Shen et al., 2021; Sanh et al., 2021).

It has been shown that augmenting unsupervised pretraining with supervised data can significantly improve task performance during finetuning (Chen et al., 2020; Gururangan et al., 2020). Some recent studies followed this idea and obtained improved few-shot or zero-shot generalization in the same manner. For instance, Mishra et al. (2021) built a dataset with task instructions, and CROSSFIT (Ye et al., 2021) introduced a repository of few-shot text-to-text tasks. FLAN (Wei et al., 2021) and T0 (Sanh et al., 2021) applied instruction-tuning over many tasks with 137B and 11B parameters, respectively. ExT5 (Aribandi et al., 2021) applies multitask pretraining as well, but it focuses on multitask co-training transfer instead of zero-shot generalization. Our ZeroPrompt utilizes labeled data in the pretraining phase, and we aim to study the task scaling law of zero-shot generalization by adopting 1,000 real-world tasks.

# 3 ZeroPrompt

We follow the same framework of multitask zero-shot learning as Wei et al. (2021) and Sanh et al. (2021), where models are pretrained on a variety of tasks and then tested on held-out unseen tasks.

# 3.1 Datasets for Scaling to 1,000+ Tasks

We collected 80 public Chinese NLP tasks and further acquired over 1,000 real-world datasets from our production systems to investigate the task number scaling law. The number of tasks in each task type is listed in Table 1, where we define task types following previous work and intuitive knowledge. The task taxonomy of the production datasets is presented in Appendix A.1, consisting of 6 task types from 10 different domains.
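The few-shot training sets described in this subsection (a small fixed budget of labeled examples per task, to mimic production labeling costs) could be built along these lines; the per-task counts (128 per class for classification, 256 for generation) follow the setup stated later in this subsection, and the field names are illustrative:

```python
import random

def sample_task_training_set(examples, task_kind, seed=0):
    """Subsample one task's labeled data: 128 examples per class for a
    classification task, 256 examples for a generation task."""
    rng = random.Random(seed)
    if task_kind == "classification":
        by_label = {}
        for ex in examples:
            by_label.setdefault(ex["label"], []).append(ex)
        picked = []
        for group in by_label.values():
            rng.shuffle(group)
            picked.extend(group[:128])
        return picked
    return rng.sample(examples, min(256, len(examples)))

# Toy classification task with 2 classes of 200 examples each.
data = [{"text": f"t{i}", "label": i % 2} for i in range(400)]
subset = sample_task_training_set(data, "classification")
print(len(subset))  # 256 (128 per class)
```

Repeating this per task and concatenating the results yields the multitask pretraining mixture.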
| Task type | # of tasks |
| --- | --- |
| Sentiment Analysis (SENTI) | 17 (4, 13) |
| News Classification (NEWS) | 9 (4, 5) |
| Intent Classification (INTENT) | 4 (1, 3) |
| Natural Language Inference (NLI) | 2 (1, 1) |
| Sentence Similarity (STS) | 13 (3, 10) |
| Paraphrase (PARA) | 1 (0, 1) |
| Question Answer Matching (QAM) | 1 (0, 1) |
| Machine Reading Comprehension (MRC) | 10 (5, 5) |
| Named Entity Recognition (NER) | 9 (3, 6) |
| Summarization (SUMM) | 9 (3, 6) |
| Keywords (KEYS) | 3 (0, 3) |
| Winograd Schema Challenge (WSC) | 1 (0, 1) |
| App Classification (APP) | 1 (0, 1) |
| Production tasks (Objection) | 110 (85, 25) |
| Production tasks (Profile) | 345 (268, 77) |
| Production tasks (Execution) | 310 (240, 70) |
| Production tasks (Mention) | 125 (97, 28) |
| Production tasks (Violation) | 90 (70, 20) |
| Production tasks (Acception) | 50 (38, 12) |
| In total | 1110 (824, 286) |
Table 1: The number of tasks for each task type. Numbers in brackets stand for the number of tasks for training and testing, respectively; e.g., SENTI has 4 tasks for training and 13 for testing.

We split the public datasets and the production datasets into training tasks and testing tasks, as shown in Table 1. Different from FLAN (Wei et al., 2021) or T0 (Sanh et al., 2021), our test set contains a more diverse set of task clusters. Detailed train/test splits can be found in Table 8. To simulate real-world NLP production systems at scale, where the costs for data labeling are expensive, we sample 128 examples per class for each classification task and 256 examples for each generation task to build the training set.

# 3.2 Prompt Design

Although large-scale pretrained models with prompting show promising results on zero-shot generalization to unseen tasks without any labeled data, prompt design is of vital importance to their performance. We apply both the hard prompt, which is composed of label candidates and task descriptions, and the soft prompt at the multitask pretraining stage; details of the prompt design can be found in Appendix A.4.

# 4 Experiments

# 4.1 Experiment Setups

We compare ZeroPrompt with state-of-the-art large-scale Chinese pretrained models, Pangu-$\alpha$ (13B decoder) (Zeng et al., 2021), CPM-2 (11B encoder-decoder) (Zhang et al., 2021), and a finetuned RoBERTa-large model (Liu et al., 2019). All finetuned baselines were trained one task at a time. We use an encoder-decoder model and apply both unsupervised pretraining and multitask prompted supervised pretraining. Training details of ZeroPrompt can be found in Appendix A.3.

| Task type | Task | CPM-2 zero-shot | Pangu-$\alpha$ zero-shot | T5 zero-shot | RoBERTa finetuning | ZeroPrompt zero-shot | T5 finetuning |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SENTI | online-shopping_10cats | 80.60 | 61.99 | 71.88 | 95.30 (0.42) | 95.90 (0.24) | 96.94 (0.26) |
| SENTI | nlpcc2014_task2 | 68.53 | 56.22 | 60.06 | 72.09 (0.80) | 80.49 (0.80) | 80.67 (0.21) |
| SENTI | SMP2019_ECISA | 29.04 | 40.41 | 31.21 | 69.45 (1.65) | 38.46 (0.33) | 74.15 (0.30) |
| NEWS | CCFBDCI2020 | 49.57 | 38.09 | 27.48 | 90.73 (0.58) | 80.50 (1.68) | 96.53 (0.41) |
| INTENT | catslu_traindev | 62.63 | 46.65 | 11.27 | 91.09 (2.33) | 90.48 (0.78) | 94.42 (0.66) |
| NLI | ocnli_public | 33.76 | 38.58 | 30.51 | 54.70 (0.53) | 46.16 (1.87) | 58.15 (1.61) |
| STS | CBLUE-CHIP-STS | 44.15 | 56.40 | 44.94 | 80.28 (1.08) | 77.90 (0.59) | 82.45 (2.07) |
| STS | sohu-sts-B-ss | 33.50 | 54.94 | 43.46 | 89.71 (0.68) | 79.85 (1.03) | 89.85 (0.86) |
| QAM | nlpcc2016-dbqa | 49.90 | 56.08 | 51.69 | 56.31 (1.51) | 62.61 (3.64) | 76.76 (1.95) |
| PARA | PAWS-X | 48.08 | 53.06 | 48.08 | 53.51 (0.53) | 54.90 (0.37) | 59.04 (0.51) |
| MRC | cmrc2018_public | 8.51 | 11.61 | 5.94 | - | 35.50 (0.73) | 61.00 (0.80) |
| NER | msra_ner | 3.11 | 9.81* | 21.44 | - | 58.17 (4.40) | 65.37 (2.65) |
| NER | CMeEE | 1.18 | 9.44* | 6.77 | - | 24.84 (0.94) | 29.34 (2.84) |
| SUMM | EDU_SUMM | 1.05 | 10.02 | 2.21 | - | 14.80 (3.15) | 16.97 (2.11) |
| KEYS | COTE-MFW | 1.29 | 4.91 | 7.05 | - | 50.34 (9.01) | 79.35 (1.08) |
| WSC | cluewsc2020_public | 57.74 | 44.93 | 44.08 | 71.99 (3.32) | 47.98 (4.18) | 72.81 (2.19) |
| APP | iflytek_public | 4.77 | 7.85 | 1.69 | 50.34 (0.61) | 26.14 (1.02) | 53.33 (1.05) |
| Production | Return Commitment | 36.28 | 51.83 | 53.28 | 96.16 (0.21) | 95.53 (0.24) | 96.78 (0.62) |
| Production | Heating Supply | 44.89 | 31.61 | 44.57 | 97.48 (0.30) | 99.22 (0.35) | 98.91 (0.59) |
| Production | Return Amount | 53.26 | 46.09 | 55.90 | 90.71 (0.33) | 89.48 (0.56) | 90.86 (0.47) |
| Production | Registration Discount | 55.09 | 50.34 | 56.25 | 88.68 (0.40) | 88.48 (0.51) | 89.88 (0.65) |
| Production | Operation Guidance | 57.97 | 47.71 | 54.52 | 90.78 (0.35) | 78.24 (1.41) | 92.80 (0.84) |
| Production | Promise for Refunding | 46.80 | 49.35 | 48.57 | 93.71 (0.24) | 94.28 (0.56) | 91.40 (1.13) |
| Production | Households Heating Plant | 63.37 | 69.66 | 48.71 | 96.59 (0.47) | 98.22 (0.52) | 97.39 (0.59) |
| Production | Refunding Amount | 48.48 | 52.58 | 49.67 | 83.78 (0.52) | 88.03 (0.83) | 83.74 (1.67) |
| Production | Cost Abatement | 43.18 | 48.13 | 51.51 | 80.30 (0.92) | 81.88 (0.22) | 81.40 (1.02) |
| Production | WeChat Operation | 45.45 | 51.37 | 47.79 | 82.28 (0.59) | 78.25 (0.26) | 83.53 (1.59) |
| AVG | | 39.71 | 40.73 | 37.80 | - | 68.76 (1.48) | 77.55 (1.14) |
| AVG excl. GEN | | 48.05 | 47.90 | 44.42 | 80.73 (0.85) | 76.04 (1.02) | 83.72 (0.94) |

Table 2: Main results of ZeroPrompt (1.5B) and other zero-shot/finetuning baselines. The numbers in brackets are the standard deviations over 5 different random seeds. -: we do not finetune RoBERTa on generation tasks because it is an encoder-only model. *: only part of the test set is sampled for evaluation due to the computation burden. Blue numbers indicate the cases where ZeroPrompt scores better than finetuned RoBERTa, and bold numbers indicate the cases where ZeroPrompt achieves the best zero-shot performance.

# 4.2 Main Results

# 4.2.1 Power of Task Scaling

To study the law of task scaling, we trained ZeroPrompt on a mixture of public data and production data, and increased the number of production training tasks from 20 to 800. Zero-shot performance
+ +# 4.2.2 Comparison with Other Baselines + +Results on the reserved testing tasks are shown in Table 2, in the zero-shot setting, ZeroPrompt significantly improves the performance of T5 from 37.80 to 68.76 with a boost of 30.96 points, outper + +![](images/c1698be36b401431f313e21baf86ca23ceb6e15c01fc4f37fd797561a9d47f1e.jpg) +Figure 2: Zero-shot performance on cross-task-type tasks with different number of training tasks. + +
    Model size100 tasks 128-shot80 tasks 1280-shot800 tasks 128-shot
    0.4B70.582.587.3
    1.5B84.086.289.2
    12B84.888.789.4
    + +forming previous PTMs, CPM-2 and Pangu- $\alpha$ , by a large margin of 28 points. Notably, ZeroPrompt is comparable to or even better than a finetuned RoBERTa-large model on some academic and production datasets. Compared to the overall score of the finetuned RoBERTa, ZeroPrompt is only 4.7 points short. This is quite ecstatic considering that ZeroPrompt did not use any labeled data for tuning. + +# 4.3 Discussions + +# 4.3.1 Task Scaling vs Sample Scaling + +While task scaling by definition also increases the number of training samples, we also decouple the effects of task scaling and sample scaling in Table 3. The numbers of total samples are the same for "80 tasks with 1280 shots" and "800 tasks with 128 shots", but the latter shows considerably better performance—4.8 and 3.0 points improvement for the 0.4B model and the 1.5B model, respectively. + +# 4.3.2 Unsupervised Data vs Supervised Data + +Table 3: Task scaling vs sample scaling. + +
| Model | 0.4B | 1.5B | 12B |
| --- | --- | --- | --- |
| LM loss | 1.9 | 1.7 | 1.5 |
| Sup loss | 0.19 | 0.17 | 0.19 |

Table 4: Language modeling (LM) and supervised (Sup) validation loss of models with different sizes.

Zero-shot performance is attributed to both the supervised tasks and the LM task. As we increase the number of supervised tasks, they outweigh the LM task. Meanwhile, these supervised tasks have much less data to fit than the LM task, which makes smaller models viable choices. Table 4 shows that smaller models have similar losses on the supervised tasks but higher losses on LM, compared to larger models. This explains why task scaling can be an alternative to model scaling.

![](images/74bdcd97366740c3d97e857604a7eb0fe3265979bab86be6d2092d90bfe300bc.jpg)
Figure 3: Zero-shot performance of the 1.5B model on public datasets with different numbers of production training tasks.

# 4.3.3 Effect of Task Distribution

To validate zero-shot performance on cross-task-type tasks, we select production tasks from two task types for testing and the rest for training, as presented in Figure 2. It can be seen that task scaling still leads to a significant improvement of zero-shot performance on cross-task-type tasks. On the other hand, Figure 3 shows the zero-shot performance on public datasets. For some tasks like INTENT, the scaling of production tasks is helpful, but the result can be different for other tasks like SENTI. The average performance over all public datasets does not increase monotonically with more training tasks. We suppose the reason is that the task distribution of the production data differs from that of the public tasks, so only part of the public tasks benefit from the scaling of production training tasks. We also study the effect of cross-task-type transfer on public tasks; the results can be found in Appendix A.6.

# 5 Conclusions

In this paper, we propose ZeroPrompt, a multitask prompted pretraining method that significantly improves the zero-shot generalization ability of language models.
In our experiments, we collect over 1,000 real-world production tasks to study the task scaling law. We find that, on the datasets we consider, the zero-shot performance gap between small and large models becomes less significant as more training tasks are used. As a result, task scaling can substantially improve training and serving efficiency.

# 6 Limitations

Our results regarding the effect of task scaling on zero-shot performance still have a few limitations. Specifically, we control our study by only increasing the number of tasks collected from our production system, and these might represent only a subset of all NLP problems. In addition, for different testing tasks in the public datasets, zero-shot performance might not increase with the scaling of production training tasks. Therefore, the conclusion that task scaling can significantly boost zero-shot performance is limited to the case where training and test tasks share some similarity in distribution; it is not a general conclusion for arbitrary distributions. It also remains an open problem how to quantitatively characterize the distribution similarity between training and test tasks. We hope our results encourage future work on addressing these limitations to further explore the potential of zero-shot learning.

# References

Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, and Donald Metzler. 2021. ExT5: Towards extreme multi-task scaling for transfer learning.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc. +Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. 2020. Big self-supervised models are strong semi-supervised learners. Advances in Neural Information Processing Systems, 33:22243-22255. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of + +deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Jingfei Du, Edouard Grave, Belize Gunel, Vishrav Chaudhary, Onur Celebi, Michael Auli, Veselin Stoyanov, and Alexis Conneau. 2021. Self-training improves pre-training for natural language understanding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5408-5418, Online. Association for Computational Linguistics. +Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Empirical Methods in Natural Language Processing (EMNLP). +Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. 
Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360. +Dong-Hyun Lee et al. 2013. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, volume 3, page 896. +Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. Gpt understands, too. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. +Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2021. Cross-task generalization via natural language crowdsourcing instructions. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language models are unsupervised multitask learners. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67. +Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, + +Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Stella Biderman, Leo Gao, Tali Bers, Thomas Wolf, and Alexander M. Rush. 2021. Multitask prompted training enables zero-shot task generalization. +Timo Schick and Hinrich Schütze. 
2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online. Association for Computational Linguistics. +Feihong Shen, Jun Liu, and Ping Hu. 2021. Conter-factual generative zero-shot semantic segmentation. ArXiv, abs/2106.06360. +Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2021. Finetuned language models are zero-shot learners. +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32. +Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. 2021. Crossfit: A few-shot learning challenge for cross-task generalization in nlp. +Wei Zeng, Xiaozhe Ren, Teng Su, Hui Wang, Yi Liao, Zhiwei Wang, Xin Jiang, ZhenZhang Yang, Kaisheng Wang, Xiaoda Zhang, Chen Li, Ziyan Gong, Yifan Yao, Xinjing Huang, Jun Wang, Jianfeng Yu, Qi Guo, Yue Yu, Yan Zhang, Jin Wang, Hengtao Tao, Dasen Yan, Zexuan Yi, Fang Peng, Fangqing Jiang, Han Zhang, Lingfeng Deng, Yehong Zhang, Zhe Lin, Chao Zhang, Shaojie Zhang, Mingyue Guo, Shanzhi Gu, Gaojun Fan, Yaowei Wang, Xuefeng Jin, Qun Liu, and Yonghong Tian. 2021. Panguα: Large-scale autoregressive pretrained chinese language models with auto-parallel computation. +Zhengyan Zhang, Yuxian Gu, Xu Han, Shengqi Chen, Chaojun Xiao, Zhenbo Sun, Yuan Yao, Fanchao Qi, Jian Guan, Pei Ke, Yanzheng Cai, Guoyang Zeng, Zhixing Tan, Zhiyuan Liu, Minlie Huang, Wentao Han, Yang Liu, Xiaoyan Zhu, and Maosong Sun. 2021. Cpm-2: Large-scale cost-effective pre-trained language models. +Ruiqi Zhong, Kristy Lee, Zheng Zhang, and Dan Klein. 2021. Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections. 
# A Appendix

# A.1 Datasets

For a fair evaluation of zero-shot generalization, we collect diverse public Chinese NLP datasets covering different task types. Table 8 summarizes all datasets used in the experiments, including the train/test task split and the metric for each task. In total, we have 13 task types of public datasets and 6 task types of production datasets.

# A.1.1 Public Datasets

- Sentiment Analysis requires the model to determine whether the sentiment of a piece of text is positive or negative.
- News Classification asks the model to predict the topic of a news article.
- Intent Classification asks the model to predict a person's intent given one of his/her utterances.
- Machine Reading Comprehension Question Answering requires the model to answer a question given a document from which the answer can be derived.
- Natural Language Inference asks the model to decide whether the relation between two sentences is neutral, entailment, or contradiction.
- Sentence Similarity asks the model to predict whether two sentences are similar or not.
- Paraphrase asks the model to tell whether two sentences with large lexical overlap are semantically equivalent.
- Question Answer Matching asks the model to judge whether two given sentences form a valid question-answer pair.
- Named Entity Recognition requires the model to find all entities in a given piece of text.
- Summarization requires the model to produce a one- or two-sentence summary of a given long document.
- Keywords asks the model to extract keywords from a given sentence.
- Winograd Schema Challenge, each sample of which comprises a sentence, a pronoun, and an entity in the sentence, requires the model to tell whether the pronoun refers to the entity.

![](images/8fa9317c9a3ee63fa9af79e1ddf44df1c3fb3b878e53ec0310b6775ecf5a20fd.jpg)
Figure 4: The task taxonomy of the real-world production datasets.
The tasks are collected from commercial sales conversations in ten domains, e.g., Auto and Insurance. Task types are marked by different colors. For example, "Profile" is to predict an aspect of a customer's profile from a given transcribed text, and "Acceptance" is to judge whether a salesperson follows a certain sales script.

- App Classification asks the model to tell which type of App a given introduction is about; there are hundreds of target App categories.

# A.1.2 Production Datasets

The task taxonomy of the production datasets is presented in Figure 4, consisting of 6 task types from 10 different domains, all gathered from a realistic industrial scenario. We provide detailed explanations here and several examples in Table 9.

- **Objection** tasks are language understanding tasks where the model has to analyze whether the speaker is proposing an argument in opposition to the preceding contents.
- **Profile** tasks are language understanding tasks similar to intent classification, where the model has to tell whether the current sentence describes a certain intention.
- **Mention** tasks are language understanding tasks where the model has to judge whether a given sentence mentions sales keywords.
- **Violation** tasks are language understanding tasks where the model has to tell whether the speaker violates the sales guidelines.
- **Acceptance** tasks are language understanding tasks where the model has to tell whether the speaker follows the system's instruction and tells sales keywords to the customer.
- **Execution** tasks are language understanding tasks where the model has to find out whether a salesperson follows the predefined sales guidance when talking to a customer.

# A.1.3 Avoiding Test Set Contamination

Although we split the datasets into training and testing, there is non-negligible overlap between some of the training datasets and the test set. To avoid test set contamination, we follow the filtering method of Brown et al. (2020). Specifically, we remove from the training phase all examples that have a 30-gram overlap with any example in the test phase.

# A.2 Metrics

The metrics used for the diverse NLP tasks in this paper are as follows.

AUC is the Area Under the ROC Curve. Typically, its value lies between 0.5 and 1.0.

ROUGE (Recall-Oriented Understudy for Gisting Evaluation) is an evaluation method oriented to the recall of n-grams. We use ROUGE-1 in this paper.

Micro-F1 is used to evaluate multi-label classification tasks. It is the harmonic mean of the averaged precision and recall over all labels.

F1 measures the overlap between the prediction and the ground truth, and is typically used in span prediction tasks.

Pos-F1 is customized for NER tasks in the text-to-text form shown in Table 16. It is the averaged string F1 score over positive samples, i.e., those whose true label is not "blank".

# A.3 Training Details

In the unsupervised pretraining stage, our base T5 model is pretrained for 100k steps on a 300 GB web-crawled Chinese corpus with a batch size of 4096 and a sequence length of 512. In the multitask prompted training stage, ZeroPrompt is trained with the Adam optimizer for 1500 more steps with a batch size of 64 and a learning rate of $3.5 \times 10^{-5}$.
We repeat all experiments, including multitask pretraining and the finetuning of RoBERTa and T5, five times with different random seeds to reduce variance.

In the unsupervised pretraining stage, we apply the span corruption objective, a variant of Masked Language Modeling (MLM), following T5 (Raffel et al., 2020). We also add MLM as an auxiliary loss to overcome catastrophic forgetting in the multitask pretraining phase.

$$
\mathcal{L} = \lambda \cdot \mathcal{L}_{\text{sup}} + \mathcal{L}_{\text{MLM}} \tag{1}
$$

The multitask pretraining loss is given in Equation 1, where $\mathcal{L}$ is the overall training loss, $\mathcal{L}_{\text{sup}}$ is the multitask supervised loss, $\mathcal{L}_{\text{MLM}}$ is the MLM loss, and $\lambda$ is the loss weight. According to Table 18, ZeroPrompt gains 1.3 points by adding the MLM loss, supporting our hypothesis that it helps avoid catastrophic forgetting.

# A.4 Prompt Design

In this subsection, we describe our chosen prompt design and some other tested variants.

In the simplest form of a prompt template $T$, the prompting method constructs $T$ from a handcrafted prompt $P$ and the input text sequence $X$: $T = \{P, X, [\text{MASK}]\}$, where [MASK] is the blank to be filled with an answer to complete the sentence. This is known as sentence in-filling.

As illustrated in Figure 5, our optimized prompt $P$ is further decomposed into three parts: the task-specific soft prompt $\mathcal{E}$, the verbalizer prompt $\mathcal{V}$, and the task description prompt $\mathcal{D}$.
As a result, our prompt template $T$ can be expressed as:

$$
T = \{\mathcal{E}, \mathcal{V}, \mathcal{D}, X, [\text{MASK}]\} \tag{2}
$$

To disentangle the task-specific and task-agnostic knowledge in multitask pretraining, we install a continuous prompt embedding as a prefix, referred to as the task-specific soft prompt in Figure 5.

We first validate the importance of including the task-specific soft prompt and the verbalizer prompt

Figure 5: The hybrid prompt composed of the task-specific soft prompt, the verbalizer prompt, and the task description prompt.
![](images/c5582e583e504ec88ac5a3f82b8971b1359bb596b31b8c5f2054e59b6bde88ac.jpg)
Sample text: The Canon 60D is an 18-megapixel digital SLR camera with a 3-inch flip ...
Verbalizer prompt: Tech, Sport, Finance, Entertainment, ...
Task description prompt: What is the topic of the following news?
Input: [Task-specific soft prompt placeholders] Tech, Sport, Finance, Entertainment, ... What is the topic of the following news? _. Text: The Canon 60D is an 18-megapixel digital SLR camera with a 3-inch flip LCD display that is targeted ...
Output: Tech
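The hybrid template of Equation 2 can be sketched programmatically. All names below (`build_template`, the `[SOFT*]` placeholders) are our own illustrative assumptions; in the real model the soft prompt $\mathcal{E}$ is a sequence of trainable embeddings, not literal tokens.

```python
# Sketch of the hybrid prompt template T = {E, V, D, X, [MASK]} (Equation 2).
# The [SOFT*] placeholders stand in for the task-specific soft prompt E,
# which in the actual model is a prefix of continuous embeddings.

MASK = "[MASK]"

def build_template(num_soft_tokens, verbalizer, description, text):
    """Concatenate soft-prompt placeholders (E), verbalizer options (V),
    the task description (D), the mask to fill, and the input text (X)."""
    soft = " ".join(f"[SOFT{i}]" for i in range(num_soft_tokens))
    options = ", ".join(verbalizer)
    return f"{soft} {options}. {description} {MASK}. Text: {text}"

template = build_template(
    num_soft_tokens=2,
    verbalizer=["Tech", "Sport", "Finance", "Entertainment"],
    description="What is the topic of the following news?",
    text="The Canon 60D is an 18-megapixel digital SLR camera ...",
)
```

This mirrors the Figure 5 example: verbalizer options and the task description precede the blank, followed by the sample text.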
|  | All | Seen | Unseen |
| --- | --- | --- | --- |
| proposed | 46.16 (↑3.89) | 46.82 (↑2.83) | 41.57 (↑11.4) |
| $-\mathcal{V}$ | 42.88 (↑0.61) | 43.87 (↓0.12) | 35.92 (↑5.75) |
| $-\mathcal{E}$ | 45.06 (↑2.79) | 46.40 (↑2.41) | 35.66 (↑5.49) |
| $-\mathcal{E},\mathcal{V}$ | 42.27 | 43.99 | 30.17 |
Table 5: Ablation results on the optimized prompt design. $-\mathcal{V}$: without the verbalizer prompt; $-\mathcal{E}$: without the task-specific soft prompt; $-\mathcal{E},\mathcal{V}$: without both the verbalizer prompt and the task-specific soft prompt.

in our choice of prompt design, and then compare different methods of building new task-specific prompt embeddings. Ablation results on the optimized prompt design are shown in Table 5. We can see that task-specific soft prompts and verbalizer prompts are useful when applied separately, and yield an even greater gain of 4 points when combined in our ZeroPrompt.

For unseen tasks, we need to build task-specific soft prompts without any labeled sample. First, we tune a classifier on the mixture of training data to identify which training task a given text belongs to; for new samples in a test task, the classifier predicts the similarity of each sample to the training tasks. Formally, for pretrained task $i$, we denote its task-specific prompt embedding by $\mathcal{E}_i$ and the classifier's output probability for training task $i$ by $\text{prob}_i$. In our experiments, we have tried three methods to build the test-task prompt embedding $\mathcal{E}_{\text{new}}$: weighted, top1, and random.

1) weighted. We set $\mathcal{E}_{\text{new}}$ to a weighted average of the pretrained task prompt embeddings according to the probabilities:

$$
\mathcal{E}_{\text{new}} = \sum_{i = 1}^{N} \text{prob}_i \times \mathcal{E}_i \tag{3}
$$

Note that the weighted average can be taken at the sample level as well as at the task level.

2) top1. We assign the most similar
|  | none | weighted avg | top1 | random init |
| --- | --- | --- | --- | --- |
| All | 44.83 | 46.01 | 46.06 | 46.16 |
| Seen | 46.67 | 46.77 | 46.79 | 46.82 |
| Unseen | 31.98 | 40.65 | 40.95 | 41.57 |
Table 6: Ablation results on building new task-specific soft prompt embeddings.

task prompt embedding to the new task:

$$
\mathcal{E}_{\text{new}} = \mathcal{E}_{k}, \quad \text{where } k = \arg\max_{i} (\text{prob}_i),\; i \in [1, N] \tag{4}
$$

3) random. We initialize the task prompt embedding $\mathcal{E}_{\text{new}}$ randomly.

Ablation results are given in Table 6. Note that for weighted avg and top1 we only report per-sample results; results using all samples are given in Table 19. Surprisingly, the winning approach is random init: directly reusing similar task prompt embeddings seen in training, in various ways, is slightly worse than random init, and the worst-performing method is none, as expected. To explain the results on random init and top1, we conjecture that different tasks, even with similar input data distributions, still have different mappings $\mathcal{X} \rightarrow y$. Therefore, in the zero-shot learning setting it is often difficult to find the most appropriate task-specific soft prompt from the training phase for a new task.

# A.5 Data Retrieval and Self-training

To fully exploit unsupervised data, we adopt a self-training framework similar to those of Lee et al. (2013) and Du et al. (2021). Given a supervised training set $D_{\text{train}}$ and an unlabeled dataset $D_{\text{un}}$, we retrieve task-similar data from the unsupervised corpus according to sentence-embedding similarity, and the self-training process may repeat several times. For the sentence embeddings used in retrieval, a pretrained BERT is finetuned on both the unsupervised and supervised corpora using SimCSE (Gao et al., 2021).
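A minimal sketch of this retrieval step, assuming cosine similarity over precomputed sentence embeddings; the encoder itself (the SimCSE-finetuned BERT) is abstracted away, and the function names are our own.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(task_embs, candidate_embs, k):
    """Rank unlabeled candidates by their best similarity to any embedded
    training sentence of the task, and keep the indices of the top k."""
    scored = []
    for idx, cand in enumerate(candidate_embs):
        best = max(cosine(cand, t) for t in task_embs)
        scored.append((best, idx))
    scored.sort(reverse=True)
    return [idx for _, idx in scored[:k]]

# With one task embedding [1, 0], the first and third candidates are closest.
top = retrieve([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]], k=2)  # → [0, 2]
```

The retrieved candidates then feed the self-training loop of Algorithm 1.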
# Algorithm 1 Self-training

Require: $\mathcal{M}, D_{\text{un}}, D_{\text{train}}, T$

Ensure: $\mathcal{M}^*$

1: Init $D_{\text{train}}^{*} \gets D_{\text{train}}$
2: for each $t \in [0, T]$ do
3: $\mathcal{M}^* \gets$ train $\mathcal{M}$ on $D_{\text{train}}^{*}$
4: for each task $i$ do
5: inference with $\mathcal{M}^*$ on $D_{\text{un}}^{i}$
6: $D_{\text{un}}^{*i} \gets$ select samples in $D_{\text{un}}^{i}$ on which $\mathcal{M}^*$ is confident, and assign pseudo labels
7: $D_{\text{train}}^{*} \gets D_{\text{train}}^{*} \cup D_{\text{un}}^{*i}$
8: end for
9: end for
10: return $\mathcal{M}^*$

![](images/d111854a9570426d48798365dc7d33677b5fdc6563fb4caf000b3dfaf7a02470.jpg)
Figure 6: Zero-shot performance on NLI and NEWS with different held-out task types.

The process of self-training is presented in Algorithm 1, where $\mathcal{M}$ is the pretrained model and $T$ is the number of self-training epochs. For a specific task $i$, $D_{\text{train}}^{i}$ is the training set and $D_{\text{un}}^{i}$ is the unlabeled dataset. We denote by $D_{\text{train}}$ the union of all training datasets and by $D_{\text{un}}$ the union of all unlabeled datasets.

We select news classification and production datasets to study the impact of data retrieval and self-training, given that similar data are available in the unsupervised pretraining corpus. Results are summarized in Table 7. Self-training improves the validation-set performance by 0.96 and 0.10 points for NEWS and the production tasks, respectively, and improves the test zero-shot performance by 3.90 and 1.23 points. Self-training shows a larger improvement on unseen tasks than on training tasks. We attribute this to the pseudo-labeled data increasing the diversity of the training data, resulting in better zero-shot generalization.

# A.6 Effect of Cross-Task-Type Transfer

Following previous work (Wei et al., 2021; Sanh et al., 2021), we study whether held-out task types can benefit from multitask prompted pretraining.
Specifically, we choose NLI and NEWS as test task types and various other datasets as training task types. We add different training tasks in sequence, as shown in Figure 6. For NEWS, the zero-shot performance increases from 17 to 49 when adding INTENT, while adding sentence-pair tasks (STS, QAM, PARA) leads to a performance drop of 7 points. Other training task types such as SENTI, SUMM, NER, and MRC have only marginal impacts on performance. As a sanity check, we finally add NEWS to the training phase, and the performance increases from 50 to 81 as expected. The zero-shot performance on NLI rises from 32 to 37 when adding more sentence-pair tasks, and then to 39 with INTENT, but other training tasks do not further boost the performance. In conclusion, we find that the zero-shot performance on held-out task types benefits only from certain task types, and more labeled data in other task clusters does not always guarantee continuous improvement.

In comparison, our main results on task scaling indicate that performance is boosted when the number of training tasks increases under a fixed task distribution. Note that the task distribution is orthogonal to scaling the number of tasks. How to further improve zero-shot generalization by optimizing the task distribution is left to future work.

# A.7 Hard Prompt Examples

In this section, we provide details of the hard prompts used in this paper. For tasks within each Chinese task cluster, we use similar handcrafted prompts, as shown in Tables 9–17. We use both prefix prompts and cloze prompts. For text classification clusters such as SENTI and NEWS, [X] denotes the sample text. For sentence-pair task clusters such as NLI and STS, [X1] denotes the first sentence and [X2] the second sentence. For the MRC cluster, [X1] denotes the corpus and [X2] the question. For the SUMM cluster, [X] denotes the corpus, and a similar prompt form is applied for KEYS.
For NER, [X1] is the sample text and [X2] denotes the target entity type. For WSC, [X1] is the sample text and [X2] is the pronoun. For all prompts mentioned above, '_' denotes the target position to fill in the answer.
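For concreteness, instantiating one of these hard prompts amounts to slot substitution; the helper below and its example template are illustrative stand-ins for the paper's actual prompt machinery.

```python
def fill_prompt(template, slots):
    """Substitute [X]-style slots into a handcrafted prompt template; the
    remaining '_' marks the position where the model's answer goes."""
    for name, value in slots.items():
        template = template.replace(name, value)
    return template

# An MRC-style prefix prompt (cf. Table 12), with hypothetical slot values.
prompt = fill_prompt(
    "Corpus: [X1] Question: [X2] Answer: _",
    {
        "[X1]": "Wechat transfers are limited ...",
        "[X2]": "How much money can wechat transfer at most a day?",
    },
)
```

A cloze prompt is handled identically, with the '_' placed mid-sentence instead of at the end.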
| Task | Dev baseline | Dev self-training | Test baseline | Test self-training |
| --- | --- | --- | --- | --- |
| NEWS AVG | 86.49 | 87.45 (↑0.96) | 55.21 | 59.11 (↑3.90) |
| production AVG | 81.84 | 81.94 (↑0.10) | 78.08 | 79.31 (↑1.23) |
Table 7: Experimental results of data retrieval + self-training.

# A.8 Detailed Experimental Results

Detailed ablation results for each test task are presented in Tables 18 and 19.
| Task Type | Task | Train | Test | Metric |
| --- | --- | --- | --- | --- |
| Sentiment Analysis (SENTI) | yf_amazon |  |  | Micro-F1 |
|  | JD_full |  |  | Micro-F1 |
|  | JD_binary |  |  | Micro-F1 |
|  | waimai_10k |  |  | Micro-F1 |
|  | online_shopping_10cats |  |  | AUC |
|  | ChnSentiCorp_htl_all |  |  | AUC |
|  | nlpcc2014_task2 |  |  | AUC |
|  | weibo_senti_100k |  |  | AUC |
|  | yf_dianping |  |  | Micro-F1 |
|  | car_sentiment |  |  | Micro-F1 |
|  | dmsc |  |  | Micro-F1 |
|  | simplifyweibo_4 |  |  | Micro-F1 |
|  | NLPCC2014_Weibo_Emotion_classification |  |  | Micro-F1 |
|  | nCoV_100k |  |  | Micro-F1 |
|  | Internet_News |  |  | Micro-F1 |
|  | BDCI2019 |  |  | Micro-F1 |
|  | SMP2019_ECISA |  |  | Micro-F1 |
| News Classification (NEWS) | NLPCC2014_LSHT_sample |  |  | Micro-F1 |
|  | Chinanews |  |  | Micro-F1 |
|  | CNSS |  |  | Micro-F1 |
|  | CNSE |  |  | Micro-F1 |
|  | THUCNews |  |  | Micro-F1 |
|  | CCFBDCI2020 |  |  | Micro-F1 |
|  | tnews_public |  |  | Micro-F1 |
|  | Ifeng |  |  | Micro-F1 |
|  | nlpcc2017_news_headlines_categorization |  |  | Micro-F1 |
| Intent Classification (INTENT) | nlpcc2018_slu |  |  | Micro-F1 |
|  | catslu_traindev |  |  | Micro-F1 |
|  | e2e_dials |  |  | Micro-F1 |
|  | intent_classification |  |  | Micro-F1 |
| Natural Language Inference (NLI) | cmnli_public |  |  | Micro-F1 |
|  | ocnli_public |  |  | Micro-F1 |
| Sentence Similarity (STS) | LCQMC |  |  | AUC |
|  | bq Corpus |  |  | AUC |
|  | sohu_sts_A_s1 |  |  | AUC |
|  | afqmc_public |  |  | AUC |
|  | phoenix_pair |  |  | AUC |
|  | sohu_sts-A-II |  |  | AUC |
|  | sohu-sts-A-ss |  |  | AUC |
|  | sohu-sts-B-II |  |  | AUC |
|  | sohu-sts-B-s1 |  |  | AUC |
|  | sohu-sts-B-ss |  |  | AUC |
|  | CBLUE-CHIP-STS |  |  | AUC |
|  | CBLUE-KUAKE-QTR |  |  | Micro-F1 |
|  | CBLUE-KUAKE-QQR |  |  | Micro-F1 |
| Paraphrase (PARA) | PAWS-X |  |  | AUC |
| Question Answer Matching (QAM) | nlpcc2016-dbqa |  |  | AUC |
| Machine Reading Comprehension Question Answering (MRC) | c3_public |  |  | F1 |
|  | DuReader_robust |  |  | F1 |
|  | DuReader_checklist |  |  | F1 |
|  | DuReader_yesno |  |  | F1 |
|  | dureader |  |  | F1 |
|  | cmrc2018_public |  |  | F1 |
|  | DRCD |  |  | F1 |
|  | CCF2020-BDCI-QA |  |  | F1 |
|  | CAIL2019-QA |  |  | F1 |
|  | CAIL2020-QA |  |  | F1 |
| Named Entity Recognition (NER) | BosonNLP_NER_6C |  |  | Pos-F1 |
|  | cluener_public |  |  | Pos-F1 |
|  | RENMIN_NER |  |  | Pos-F1 |
|  | msra_ner |  |  | Pos-F1 |
|  | weibo_ner |  |  | Pos-F1 |
|  | nlpcc2020-AutoIE |  |  | Pos-F1 |
|  | CCF2020-BDCI-NER |  |  | Pos-F1 |
|  | CMeEE |  |  | Pos-F1 |
|  | SanWen-ner |  |  | Pos-F1 |
| Summarization (SUMM) | LCSTS |  |  | ROUGE |
|  | NLPCC2017 |  |  | ROUGE |
|  | SHENCE |  |  | ROUGE |
|  | NLPCC2015 |  |  | ROUGE |
|  | CAIL2020 |  |  | ROUGE |
|  | WANFANG |  |  | ROUGE |
|  | CSL_SUMM |  |  | ROUGE |
|  | EDU_SUMM |  |  | ROUGE |
|  | WEIBO |  |  | ROUGE |
| Keywords (KEYS) | COTE-BD |  |  | F1 |
|  | COTE-MFW |  |  | F1 |
|  | COTE-DP |  |  | F1 |
| Winograd Schema Challenge (WSC) | cluewsc2020_public |  |  | AUC |
| App Classification (APP) | iflytek_public |  |  | Micro-F1 |
| Production Datasets | 800 datasets for training |  |  | AUC |
|  | 230 datasets for testing |  |  | AUC |

Table 8: Summary of collected datasets.
    Task TypePromptslabel
    ObjectionPrompt: 这句话:[X]。上文是否体现了客户对公司不信任?回答:X: 你们是什么公司啊?我从来没听说过你们。Prompt: This sentence: [X]. Does the customer show objection about the company? Answer: X: What kind of company are yours? I have never heard of it.是(Yes)/不是(No)
    ProfilePrompt: 这句话:[X]。客户是在询问用药后的效果吗?回答:X: 吃了以后的主要作用是什么?。Prompt: This sentence: [X]. Is the customer asking about the influences of taking the medicine? Answer: X: What is the main effect after taking this?是(Yes)/不是(No)
    AcceptionPrompt: 关于电子保单查看,“[X1)”上文销售采纳了与系统推荐“[X2]”相似的描述吗?回答:X1: 让我看一下啊这个您电子版保单这块咱们有接收到吗?X2: 您的这个电子保单合同有没有收到呢?Prompt: About electronic insurance policy, Does the salesman say “[X1]” accept the system given expression “[X2]”? Answer: X1: Let me see. Did you received our electronic version of insurance policy? X2: Have you received this electronic policy contract?采纳(Accept)/没有(No)
    ViolationPrompt: 这句话:[X]。上文是否体现了坐席私自承诺客户可以随时退款?回答:X: 如果说觉得感觉不太满意的话,你可以直接申请退款。一个月之内,申请退款。Prompt: This sentence: [X]. Does the customer service privately promise that the customer can refund at any time? Answer: X: If you feel unsatisfied, you can directly apply for a refund. Within one month, apply for a refund.是(Yes)/不是(No)
    MentionPrompt: 关于保单理赔,“[X1]”是销售提及的内容与文本“[X2]”相似吗?回答:X1: 55种轻症疾病和保险公司达成理赔协议之后7到100个工作日,一次性就把这个钱赔给你了。X2: 二级及以上公立医院医生的诊断报告啊就可以申请理赔。保险公司呢是直接一次性给到我们100万块钱去看病了。Prompt: About insurance claim, Does the salesman say “[X1]” mentioned a similar description as “[X2]”? Answer: X1: For 55 mild disease, it will cost 7 to 100 working days after reaching a claim settlement agreement with the insurance company, after that, the money will be paid to you. X2: You can apply for a claim with the diagnosis report of a doctor in a public hospital of level 2 or above. The insurance company will gave you 1 million yuan directly for the disease.相似(similar)/不同(different)
    ExecutionPrompt: 这句话:[X]。上文坐席是否告知客户存在优惠价格?回答:X: 咱们现在也是有优惠活动的,为何不趁着优惠活动把身体调整一下呢?Prompt: This sentence: [X]. Does the salesman told customer there are discount price? Answer: X: We have a discount price right now, why not take a change with this discounts?是(Yes)/不是(No)
    + +Table 9: Illustrations of examples in our production datasets. + +
    Handcrafted +Prompt: “[X]” 这句汽车评论的态度是什么? _. Prompt: "[X]", What is the attitude of this car review?_ +X: 动力还可以因为搭载cvt变速箱起步发动机转速比较好。 +X: Power can also be equipped with a CVT gearbox to start the engine speed is better. +Augmentation +Prompt: “[X]” 如果这个评论的作者是客观的,那么请问,这个评论的内容是什么回答: ? _. Prompt: "[X]", If the author of this comment is objective, what is the content of this comment reply: _ +Verbalizer +积极(Positive)/消极(Negative)
    Target +积极(Positive)
    + +Table 10: Illustrations of prompts in Sentiment Analysis. + +
    Handcrafted +Prompt: 以下这篇新闻是关于什么主题的? _。新闻: [X] +Prompt: What is the topic of the following news?_. News text: [X] +X: 1800万像素单反 佳能60D套机降至9700元 作者: 陈【北京行情】佳能60D(资料 报价 图片 论坛)是一款拥有1800万像素成像能力,搭载3英寸可翻转LCD显示屏,定位于中端市场的数码单反相机。... 作为佳能畅销单反50D的继承者,佳能EOS 60D对于想拥有一台中端单反相机的用户无疑是一个不错的选择。 +X: The Canon 60D is an 18-megapixel digital SLR camera with a 3-inch flip LCD display that is targeted at the mid-market. ... The successor to Canon's best-selling DSLR 50D, the Canon EOS 60D is a good choice for anyone who wants a mid-range DSLR camera. +Augmentation +Prompt: ‘新闻文本’ 是谁写的?回答: _. “[X]” +Prompt: Who wrote the 'news text'? Answer: _. “[X]” +Verbalizer +科技(Technology)/体育(Sport)/财经(Finance)/娱乐(Entertainment)/..
    Target +科技(Technology)
    + +Table 11: Illustrations of prompts in News Classification. + +
    Handcrafted +Prompt: 文章: [X1] 问题: [X2] 回答: _。 +Prompt: Corpus: [X1] Question: [X2] Answer: _ +X1: 微信一天最多能转多少钱:这个没有限制吧,到账时间长。纠正下其他网友的回答,微信转账是有限额的。用微信零钱转账最高可以1W元,用银行卡支付就要以银行的额度为准了,最高可以转账5W元。请采纳哦。 +X2: 微信一天最多能转多少钱? +X1: Micro letter a day how much money can transfer: there is no limit to it, long to the account. To correct other netizens' answers, wechat transfers are limited. The maximum amount can be 1W yuan with wechat change, and the maximum amount can be 5W yuan for bank card payment. Please adopt it. +X2: How much money can wechat transfer at most a day? +Augmentation +Prompt: 他们是怎么猜出来的?文章: [X1] 问题: [X2] 回答: _。 +Prompt: How did they figure that out? Corpus: [X1] Question: [X2] answer: _
    Target +微信转账是有限额的。用微信零钱转账最高可以1W元,用银行卡支付就要以银行的额度为准了,最高可以转账5W元 +Wechat transfers are limited. The maximum amount can be 1W yuan with wechat change, and the maximum amount can be 5W yuan for bank card payment.
    + +Table 12: Illustrations of prompts in Machine Reading Comprehension Question Answering. + +
    Handcrafted +Prompt: 在通用领域中,第一句话: “[X1]”第二句话: “[X2]”的逻辑关系是什么?回答: _。 +Prompt: In the general context, What is the logical relationship between the first sentence "[X1]" and the second sentence "[X2)". Answer: _ +X1: 等他回来,我们就出去吃啊。 +X1: When he gets back, we'll eat out. +X2: 我们在等他。 +X2: We are waiting for him. +Augmentation +Prompt: 这两句话是如何组合在一起的?回答: _。第一句话: “[X1]”,第二句话: “[X2]” +Prompt: How do these two sentences go together? Answer: _. the first sentence: "[X1]", the second sentence: "[X2]". +Verbalizer +相反(contradiction)/中性(neutral)/一致(entailment)
    Target +一致(entailment)
    + +Table 13: Illustrations of prompts in Natural Language Inference. + +
    Handcrafted +Prompt: 在金融领域中,第一句话: “[X1]”第二句话: “[X2]”这两句话含义 _。 +Prompt: In finance context, the first sentence: “[X1]" the second sentence: “[X2]", the meaning of these two sentences is _. +X1: 花呗支持高铁票支付吗? +X1: Does Huabei support high-speed rail ticket payment? +X2: 为什么不支持花呗付款? +X2: Why not support the payment of Huabei? +Augmentation +Prompt: 它们之间的关系是怎样的?回答: _。第一句话: “[X1]”,第二句话: “[X2]” +Prompt: What is the relationship between them? Answer: _, the first sentence: “[X1]", the second sentence: “[X2”. +Verbalizer +相似(similar)/不同(different)
    Target +不同(different)
    + +Table 14: Illustrations of prompts in Sentence Similarity. + +
    Handcrafted +Prompt: 对于句子: [X1] 代词: [X2] 指代的是: [X3] 吗? 回答: _。 +Prompt: In the sentence: [X1], does the pronoun [X2] refer to [X3]? Answer: _ +X1: 满银的老祖上曾经当过“拔贡”。先人手里在这一带有过些名望。到他祖父这代就把一点家业败光了。 +X2: 他 +X3: 满银 +X1: The old ancestor of Manyin used to be "baogong". There was some renown in the hands of our ancestors. By his grandfather's generation the family business had been wiped out. +X2: he +X3: Manyin +Augmentation +Prompt: 第二句话中,有两个“它”: [X1] 其中: [X2]指的_[X3]。 +Prompt: In the second sentence, there are two "it" s: [X1] among this sentence: [X2] refer to [X3]? _ +Verbalizer +是(yes)/不是(no)
    Target +是(yes)
Table 15: Illustrations of prompts in the Winograd Schema Challenge.
    Handcrafted +Prompt: 报纸文本: [X1]中有哪些属于[X2]? 回答 +Prompt: Text from newspaper: Which words of [X1] belong to [X2]? Answer: _ +X1: 相比之下,青岛海牛队和广州松日队的雨中之战虽然也是0:0,但乏善可陈。 +X2: 机构名 +X1: In contrast, although the raining war between Qingdao manatee team and Guangzhou songri team is also 0:0, but it is too lackluster. +X2: organization +Augmentation +Prompt: 回答: _。文本[X1]报纸文本中的[X2]类别的实体是由哪些部分构成的? +Prompt: Answer: _. Text from newspaper: [X1]. Which parts make up the entities of the [X2] category in newspaper text?
    Target +青岛海牛队,广州松日队 +Qingdao manatee team, Guangzhou songri team
Table 16: Illustrations of prompts in Named Entity Recognition. Each example is extended to N instances, where N is the number of possible entity types. For each entity type, we ask the model to predict the corresponding entities present in the given text. The ground truth is "blank" if there is no entity of that type in the sentence.
    Handcrafted +Prompt: [X], 这个教育相关的文本的摘要为: _。 +Prompt: [X], A summary of this education-related text: _ +X: 中新网2月25日电 据外媒报道,意大利一名小女孩嘉比是一位动物爱好者,她经常拿自己的零食和家里的剩菜喂乌鸦,因此而收到了乌鸦送的“礼物”。据报道,嘉比经常用花生、狗粮和一些剩菜喂乌鸦,她表示,自己不是为了获得奖励而做这些,而是因为她喜欢自然。最近,乌鸦经常衔一些亮晶晶的东西给她,里面通常是些钮扣、文具和五金之类的小东西,有几次她还收到耳环,乌鸦甚至帮她妈妈把遗失的相机盖找了回去。禽鸟专家表示,乌鸦确实有和人类交朋友的能力,所以乌鸦报恩不是小女孩的想象。 +X: China News on February 25: Gabi, an Italian girl who loves animals, has received a gift from a crow for feeding her snacks and family leftovers, foreign media reported. Gabi reportedly regularly feeds the crows peanuts, dog food and some leftovers, and she said she does not ask a reward but because she loves nature. Lately, they've been bringing her shiny things, usually buttons, stationery and hardware. In a few cases, she's received earrings. They even helped her mother find the cover of a camera she'd lost. According to bird experts, crows do have the ability to make friends with humans, so it's not a little girl's imagination for them to return the favor. +Augmentation +Prompt: [X] 这个领域的领域词典中收录的单词,应该是_。 +Prompt: [X] The words in the domain dictionary of this field should be _.
Target
意大利女童用零食喂乌鸦,乌鸦送“礼物”报恩
Italian girl feeds snacks to crows, who return the kindness with “gifts”
    + +Table 17: Illustrations of prompts in Summarization. + +
| Model | $-\mathcal{E},\mathcal{V}$ | $-\mathcal{V}$ | $-\mathcal{E}$ | ZeroPrompt | +MLM |
| --- | --- | --- | --- | --- | --- |
| Total Scores* | 42.27 (0.34) | 42.88 (0.55) | 45.06 (0.69) | 46.16 (0.54) | 47.43 (0.76) |
| online_shopping_10cats | 96.11 (0.31) | 96.06 (0.27) | 95.55 (0.31) | 95.72 (0.27) | 95.90 (0.24) |
| ChnSentiCorp_htl_all | 93.80 (0.51) | 93.75 (0.57) | 93.44 (0.47) | 93.45 (0.38) | 93.98 (0.55) |
| nlpcc2014_task2 | 79.05 (0.81) | 80.42 (0.49) | 80.28 (0.64) | 80.12 (0.24) | 80.49 (0.41) |
| yf_dianping | 37.27 (2.66) | 37.27 (3.85) | 45.11 (5.41) | 44.87 (4.48) | 43.89 (2.51) |
| car_sentiment | 23.98 (0.57) | 30.49 (5.57) | 24.38 (1.64) | 25.80 (3.41) | 25.63 (1.70) |
| dmsc | 34.25 (2.13) | 36.94 (2.65) | 37.16 (3.73) | 37.88 (2.31) | 36.97 (3.08) |
| weibo_senti_100k | 86.48 (0.58) | 86.39 (1.99) | 84.23 (1.00) | 85.89 (1.22) | 86.48 (1.55) |
| simplifyweibo_4 | 18.70 (2.20) | 20.38 (2.23) | 44.58 (1.20) | 38.87 (2.06) | 42.66 (4.60) |
| NLPCC2014_Weibo_Emotion_classification | 37.57 (1.39) | 38.90 (1.20) | 40.56 (0.93) | 41.21 (1.08) | 41.28 (1.69) |
| nCoV_100k | 34.11 (0.53) | 33.62 (1.59) | 33.20 (2.00) | 34.82 (1.35) | 34.91 (0.49) |
| Internet_News | 53.61 (2.23) | 48.99 (1.95) | 52.42 (10.39) | 55.20 (8.58) | 56.92 (2.78) |
| BDCI2019 | 26.91 (5.09) | 22.53 (3.45) | 29.75 (5.22) | 36.53 (5.45) | 32.81 (3.04) |
| SMP2019_ECISA | 38.18 (1.25) | 36.44 (1.51) | 35.71 (2.76) | 38.44 (1.87) | 38.46 (0.33) |
| THUCNews | 47.43 (2.77) | 51.45 (3.98) | 66.06 (2.14) | 65.86 (2.93) | 68.66 (1.29) |
| CCFBDCI2020 | 71.92 (0.98) | 69.54 (3.55) | 74.78 (4.00) | 75.93 (4.21) | 80.50 (1.68) |
| tnews_public | 35.10 (1.14) | 34.23 (3.66) | 46.67 (1.49) | 46.35 (1.50) | 49.90 (1.36) |
| Ifeng | 60.41 (1.97) | 57.96 (4.12) | 61.32 (0.94) | 62.79 (1.21) | 63.04 (2.27) |
| nlpcc2017_news_headlines_categorization | 33.00 (1.67) | 33.52 (2.52) | 47.56 (1.72) | 47.14 (1.37) | 50.26 (1.43) |
| catslu_traindev | 90.79 (0.56) | 91.59 (0.80) | 90.45 (0.43) | 91.33 (0.54) | 90.48 (0.78) |
| e2e_dials | 69.20 (2.92) | 67.27 (4.14) | 82.02 (2.02) | 86.39 (5.50) | 88.44 (5.28) |
| intent_classification | 20.41 (1.05) | 24.99 (0.52) | 28.47 (1.47) | 34.37 (4.38) | 33.64 (3.84) |
| ocnli_public | 45.60 (1.19) | 47.60 (0.16) | 47.70 (1.20) | 47.16 (2.09) | 46.16 (1.87) |
| afqmc_public | 63.40 (0.79) | 64.37 (0.57) | 63.63 (0.91) | 63.52 (0.88) | 64.60 (0.49) |
| phoenix_pair | 98.90 (0.22) | 99.28 (0.30) | 98.77 (0.44) | 98.99 (0.17) | 98.99 (0.24) |
| sohu-sts-A-II | 64.65 (0.60) | 64.04 (0.97) | 64.21 (0.50) | 65.44 (0.72) | 65.92 (0.78) |
| sohu-sts-A-ss | 70.91 (0.37) | 71.83 (1.56) | 69.88 (1.34) | 70.70 (0.74) | 70.80 (0.46) |
| sohu-sts-B-II | 60.32 (1.69) | 60.03 (1.15) | 60.69 (1.24) | 62.23 (1.70) | 61.47 (0.79) |
| sohu-sts-B-s1 | 65.56 (1.69) | 64.51 (1.08) | 68.08 (3.01) | 68.76 (3.09) | 70.34 (0.84) |
| sohu-sts-B-ss | 77.61 (1.82) | 80.05 (0.86) | 79.64 (0.80) | 80.03 (0.97) | 79.85 (1.03) |
| CBLUE-CHIP-STS | 75.80 (1.21) | 76.90 (0.62) | 75.91 (1.12) | 75.69 (0.38) | 77.90 (0.59) |
| CBLUE-KUAKE-QTR | 26.75 (0.57) | 27.00 (0.56) | 25.97 (1.28) | 26.11 (0.77) | 25.35 (1.60) |
| CBLUE-KUAKE-QQR | 43.57 (2.03) | 41.79 (3.05) | 38.47 (7.19) | 41.74 (5.35) | 35.35 (8.27) |
| PAWS-X | 53.52 (0.64) | 55.14 (0.71) | 54.19 (0.59) | 54.41 (0.99) | 54.90 (0.37) |
| nlpcc2016-dbqa | 63.89 (2.07) | 60.90 (0.44) | 64.24 (2.68) | 62.77 (0.80) | 62.61 (3.64) |
| cmrc2018_public | 32.78 (2.01) | 33.24 (2.70) | 34.86 (2.32) | 32.07 (1.51) | 35.50 (0.73) |
| DRCD | 44.31 (3.45) | 43.08 (2.69) | 44.81 (2.27) | 43.11 (1.91) | 47.89 (2.20) |
| CCF2020-BDCI-QA | 13.05 (1.13) | 13.86 (1.73) | 15.27 (0.91) | 15.15 (0.49) | 16.22 (0.56) |
| CAIL2019-QA | 22.25 (1.16) | 21.31 (1.11) | 23.20 (0.67) | 20.61 (1.48) | 22.84 (1.61) |
| CAIL2020-QA | 27.90 (1.48) | 24.84 (3.29) | 26.45 (1.50) | 23.64 (0.81) | 26.87 (2.14) |
| msra_ner | 57.18 (4.84) | 55.38 (6.00) | 57.88 (5.04) | 60.07 (3.97) | 58.17 (4.40) |
| weibo_ner | 22.71 (1.95) | 23.24 (0.95) | 23.16 (1.42) | 23.28 (1.62) | 23.42 (0.52) |
| nlpcc2020-AutoIE | 33.65 (6.85) | 30.82 (3.52) | 33.95 (3.15) | 37.17 (4.88) | 35.29 (6.25) |
| CCF2020-BDCI-NER | 46.83 (2.91) | 45.45 (3.76) | 48.46 (2.37) | 47.35 (3.30) | 47.34 (2.30) |
| CMeEE | 24.87 (3.15) | 21.60 (2.08) | 25.59 (3.58) | 23.93 (3.09) | 24.84 (0.94) |
| SanWen-ner | 18.31 (1.96) | 16.72 (1.79) | 19.13 (2.85) | 17.82 (1.96) | 18.42 (1.63) |
| NLPCC2015 | 2.46 (0.33) | 2.47 (0.47) | 2.37 (0.27) | 2.45 (0.46) | 2.78 (0.33) |
| CAIL2020 | 0.86 (0.16) | 0.60 (0.16) | 0.82 (0.32) | 0.77 (0.41) | 0.81 (0.05) |
| WANFANG | 5.25 (0.24) | 5.23 (0.81) | 5.44 (0.36) | 5.46 (0.42) | 7.00 (0.22) |
| CSL_SUMM | 1.48 (0.22) | 1.82 (0.26) | 1.74 (0.47) | 2.05 (0.30) | 3.35 (0.55) |
| EDU_SUMM | 15.50 (4.52) | 14.74 (1.89) | 18.72 (0.95) | 15.04 (2.67) | 14.80 (3.15) |
| WEIBO | 4.95 (0.94) | 5.41 (0.31) | 4.95 (0.67) | 4.66 (0.65) | 5.45 (0.45) |
| COTE-BD | 6.81 (1.61) | 23.61 (7.55) | 20.79 (3.38) | 40.58 (6.56) | 48.29 (9.36) |
| COTE-MFW | 14.38 (2.46) | 32.34 (9.76) | 25.14 (4.61) | 43.81 (6.53) | 50.34 (9.01) |
| COTE-DP | 7.94 (3.72) | 18.46 (9.97) | 21.07 (4.50) | 23.89 (10.29) | 42.50 (6.43) |
| cluewsc2020_public | 45.66 (2.39) | 42.76 (1.40) | 40.26 (1.97) | 42.06 (1.35) | 47.98 (4.18) |
| iflytek_public | 18.99 (2.70) | 18.22 (2.51) | 23.95 (3.17) | 23.45 (3.49) | 26.14 (1.02) |

Table 18: Detailed ablation results on prompt design and the MLM loss.
| Task | none | weighted avg all samples | weighted avg per sample | top1 per sample | random init |
| --- | --- | --- | --- | --- | --- |
| ALL | 44.83(0.55) | 45.76(0.42) | 46.01(0.52) | 46.06(0.55) | 46.16(0.54) |
| online_shopping_10cats | 95.49(0.30) | 95.73(0.27) | 95.73(0.27) | 95.73(0.27) | 95.72(0.27) |
| ChnSentiCorp_htl_all | 92.92(0.51) | 93.51(0.37) | 93.42(0.37) | 93.43(0.35) | 93.45(0.38) |
| nlpcc2014_task2 | 79.90(0.29) | 80.14(0.24) | 80.14(0.23) | 80.13(0.24) | 80.12(0.24) |
| yf_dianping | 44.80(4.49) | 44.63(4.68) | 44.66(4.65) | 44.63(4.66) | 44.87(4.48) |
| car_sentiment | 24.44(1.81) | 25.74(3.38) | 25.73(3.37) | 25.79(3.37) | 25.80(3.41) |
| dmsc | 38.21(2.38) | 37.77(2.48) | 37.81(2.30) | 37.90(2.27) | 37.88(2.31) |
| weibo_senti_100k | 85.21(1.31) | 85.45(0.94) | 85.95(1.22) | 85.91(1.23) | 85.89(1.22) |
| simplifyweibo_4 | 39.54(3.07) | 38.01(1.78) | 38.67(1.76) | 38.78(1.79) | 38.87(2.06) |
| NLPCC2014_Weibo_Emotion_classification | 40.41(1.06) | 41.23(1.18) | 41.19(0.87) | 41.22(0.94) | 41.21(1.08) |
| nCoV_100k | 34.46(1.51) | 34.86(1.32) | 34.80(1.34) | 34.82(1.38) | 34.82(1.35) |
| Internet_News | 55.32(8.07) | 55.12(8.58) | 55.10(8.55) | 55.19(8.58) | 55.20(8.58) |
| BDCI2019 | 35.69(5.31) | 36.29(5.45) | 36.46(5.43) | 36.52(5.42) | 36.53(5.45) |
| SMP2019_ECISA | 37.63(2.15) | 38.49(1.90) | 38.51(1.88) | 38.51(1.87) | 38.44(1.87) |
| THUCNews | 65.58(3.27) | 65.90(2.91) | 65.89(2.91) | 65.87(2.91) | 65.86(2.93) |
| CCFBDCI2020 | 75.61(4.08) | 75.98(3.87) | 75.86(4.13) | 75.83(4.20) | 75.93(4.21) |
| tnews_public | 46.04(1.26) | 46.42(1.38) | 46.36(1.42) | 46.32(1.42) | 46.35(1.50) |
| Ifeng | 63.66(1.44) | 62.78(1.20) | 62.77(1.21) | 62.77(1.18) | 62.79(1.21) |
| nlpcc2017_news_headlines_categorization | 46.95(1.36) | 47.15(1.27) | 47.16(1.31) | 47.14(1.29) | 47.14(1.37) |
| catslu_traindev | 90.55(0.74) | 91.52(0.39) | 91.57(0.42) | 91.52(0.39) | 91.33(0.54) |
| e2e_dials | 88.24(5.05) | 86.38(5.55) | 86.36(5.50) | 86.44(5.53) | 86.39(5.50) |
| intent_classification | 32.04(3.89) | 34.37(4.37) | 34.34(4.39) | 34.37(4.37) | 34.37(4.38) |
| ocnli_public | 46.98(1.96) | 47.34(1.99) | 47.21(2.06) | 47.17(2.01) | 47.16(2.09) |
| afqmc_public | 62.96(0.92) | 63.51(0.87) | 63.50(0.86) | 63.50(0.86) | 63.52(0.88) |
| phoenix_pair | 97.92(0.98) | 98.99(0.20) | 98.98(0.20) | 98.99(0.20) | 98.99(0.17) |
| sohu-sts-A-ll | 64.97(0.57) | 65.47(0.72) | 65.47(0.73) | 65.46(0.72) | 65.44(0.72) |
| sohu-sts-A-ss | 70.19(0.89) | 70.80(0.67) | 70.73(0.70) | 70.72(0.74) | 70.70(0.74) |
| sohu-sts-B-ll | 61.81(1.39) | 62.23(1.64) | 62.22(1.61) | 62.22(1.64) | 62.23(1.70) |
| sohu-sts-B-sl | 68.48(2.57) | 68.77(3.11) | 68.77(3.11) | 68.76(3.11) | 68.76(3.09) |
| sohu-sts-B-ss | 79.77(0.78) | 80.00(0.99) | 79.99(0.94) | 80.01(0.96) | 80.03(0.97) |
| CBLUE-CHIP-STS | 74.93(0.51) | 75.66(0.36) | 75.67(0.36) | 75.67(0.36) | 75.69(0.38) |
| CBLUE-KUAKE-QTR | 25.73(0.85) | 26.11(0.85) | 26.14(0.86) | 26.12(0.84) | 26.11(0.77) |
| CBLUE-KUAKE-QQR | 41.09(6.06) | 41.62(5.20) | 41.70(5.22) | 41.62(5.21) | 41.74(5.35) |
| PAWS-X | 54.48(1.11) | 54.39(0.96) | 54.40(0.96) | 54.39(0.96) | 54.41(0.99) |
| nlpcc2016-dbqa | 59.45(2.65) | 62.86(0.87) | 62.81(0.93) | 62.84(0.87) | 62.77(0.80) |
| cmrc2018_public | 34.43(1.64) | 32.00(1.54) | 31.94(1.54) | 31.90(1.54) | 32.07(1.51) |
| DRCD | 42.99(3.90) | 42.48(2.52) | 42.57(2.50) | 42.50(2.50) | 43.11(1.91) |
| CCF2020-BDCI-QA | 16.20(1.02) | 14.96(0.53) | 14.99(0.54) | 15.15(0.69) | 15.15(0.49) |
| CAIL2019-QA | 20.88(2.19) | 20.29(1.32) | 20.52(1.47) | 20.58(1.54) | 20.61(1.48) |
| CAIL2020-QA | 22.62(2.14) | 23.29(0.84) | 23.43(0.61) | 23.61(0.63) | 23.64(0.81) |
| msraNER | 60.67(4.12) | 60.05(4.45) | 60.08(4.30) | 60.00(4.13) | 60.07(3.97) |
| weiboNER | 23.20(1.60) | 23.36(1.72) | 23.47(1.80) | 23.48(1.72) | 23.28(1.62) |
| nlpcc2020-AutoIE | 38.95(6.31) | 35.92(4.59) | 36.88(4.98) | 36.78(4.95) | 37.17(4.88) |
| CCF2020-BDCI-NER | 47.51(4.18) | 47.28(3.68) | 47.35(3.40) | 47.47(3.31) | 47.35(3.30) |
| CMeEE | 21.25(2.78) | 24.26(3.27) | 24.18(3.23) | 23.80(3.11) | 23.93(3.09) |
| SanWen-ner | 18.26(1.91) | 17.80(2.06) | 17.85(2.03) | 17.90(1.93) | 17.82(1.96) |
| NLPCC2015 | 2.05(0.33) | 2.41(0.42) | 2.37(0.44) | 2.55(0.44) | 2.45(0.46) |
| CAIL2020 | 0.79(0.39) | 0.74(0.42) | 0.77(0.42) | 0.81(0.45) | 0.77(0.41) |
| WANFANG | 5.64(0.52) | 5.30(0.38) | 5.32(0.32) | 5.39(0.47) | 5.46(0.42) |
| CSL_SUMM | 1.69(0.37) | 1.89(0.25) | 1.84(0.24) | 1.91(0.33) | 2.05(0.30) |
| EDU_SUMM | 16.81(1.73) | 13.71(2.73) | 14.80(2.94) | 15.10(2.87) | 15.04(2.67) |
| WEIBO | 5.40(0.88) | 4.61(0.62) | 4.63(0.62) | 4.68(0.65) | 4.66(0.65) |
| COTE-BD | 14.62(4.81) | 26.80(4.97) | 38.13(6.50) | 39.09(7.09) | 40.58(6.56) |
| COTE-MFW | 16.35(5.31) | 41.65(8.03) | 40.64(7.40) | 41.65(7.63) | 43.81(6.53) |
| COTE-DP | 12.21(7.17) | 22.62(10.85) | 22.69(10.79) | 22.80(11.12) | 23.89(10.29) |
| cluewsc2020_public | 43.11(0.63) | 42.50(1.41) | 42.50(1.41) | 42.50(1.41) | 42.06(1.35) |
| iflytek_public | 23.61(3.30) | 23.39(3.50) | 23.39(3.51) | 23.37(3.41) | 23.45(3.49) |
Table 19: Detailed ablation results on building new task-specific soft prompts
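The column headers in the table above name strategies for constructing a new task's soft prompt from a bank of already-trained task prompts. A minimal sketch of what those strategies could look like; all shapes, function names, and the softmax-weighted combination are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prompt bank: one learned soft prompt per existing task,
# each a (prompt_len, dim) matrix of embeddings.
n_tasks, prompt_len, dim = 8, 16, 32
prompt_bank = rng.normal(size=(n_tasks, prompt_len, dim))
# Assumed similarity scores between the new task and each source task.
scores = rng.random(n_tasks)

def weighted_avg_init(bank, scores):
    """'weighted avg': combine all source prompts, softmax-weighted by score."""
    w = np.exp(scores) / np.exp(scores).sum()   # weights sum to 1
    return np.tensordot(w, bank, axes=1)        # -> (prompt_len, dim)

def top1_init(bank, scores):
    """'top1': copy the prompt of the single highest-scoring source task."""
    return bank[np.argmax(scores)]

def random_init(prompt_len, dim, rng):
    """'random init': train the new prompt from scratch."""
    return rng.normal(size=(prompt_len, dim))

new_prompt = weighted_avg_init(prompt_bank, scores)
assert new_prompt.shape == (prompt_len, dim)
```

Under this reading, "per sample" variants would recompute the weights for each input example rather than once per task; the table suggests the differences between these initializations are small in aggregate (the "ALL" row) but large on individual tasks such as COTE-BD.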